CN118457865A - Yacht control method and system and yacht - Google Patents
- Publication number
- CN118457865A (application CN202410737893.1A)
- Authority
- CN
- China
- Prior art keywords
- information
- yacht
- driver
- detection
- take
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B63—SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
- B63B—SHIPS OR OTHER WATERBORNE VESSELS; EQUIPMENT FOR SHIPPING
- B63B79/00—Monitoring properties or operating parameters of vessels in operation
- B63B79/30—Monitoring properties or operating parameters of vessels in operation for diagnosing, testing or predicting the integrity or performance of vessels
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B63—SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
- B63B—SHIPS OR OTHER WATERBORNE VESSELS; EQUIPMENT FOR SHIPPING
- B63B45/00—Arrangements or adaptations of signalling or lighting devices
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B63—SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
- B63B—SHIPS OR OTHER WATERBORNE VESSELS; EQUIPMENT FOR SHIPPING
- B63B79/00—Monitoring properties or operating parameters of vessels in operation
- B63B79/10—Monitoring properties or operating parameters of vessels in operation using sensors, e.g. pressure sensors, strain gauges or accelerometers
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B63—SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
- B63B—SHIPS OR OTHER WATERBORNE VESSELS; EQUIPMENT FOR SHIPPING
- B63B79/00—Monitoring properties or operating parameters of vessels in operation
- B63B79/40—Monitoring properties or operating parameters of vessels in operation for controlling the operation of vessels, e.g. monitoring their speed, routing or maintenance schedules
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B63—SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
- B63B—SHIPS OR OTHER WATERBORNE VESSELS; EQUIPMENT FOR SHIPPING
- B63B45/00—Arrangements or adaptations of signalling or lighting devices
- B63B2045/005—Arrangements or adaptations of signalling or lighting devices comprising particular electric circuits
Landscapes
- Chemical & Material Sciences (AREA)
- Engineering & Computer Science (AREA)
- Combustion & Propulsion (AREA)
- Mechanical Engineering (AREA)
- Ocean & Marine Engineering (AREA)
- Traffic Control Systems (AREA)
Abstract
The application is applicable to the technical field of yacht control, and particularly relates to a yacht control method, a yacht control system and a yacht. The yacht control method comprises the following steps: acquiring azimuth information of a driver; analyzing and processing the azimuth information to obtain state information of the driver; obtaining yacht take-over detection information according to the state information, wherein the yacht take-over detection information comprises information of a detection task of the yacht that is automatically taken over; detecting the periphery of the yacht based on the yacht take-over detection information to obtain yacht detection data, wherein the yacht detection data reflect specific conditions in an area extending outwards from the outer contour of the yacht by a preset distance; and reminding the driver to take corresponding measures based on the yacht detection data. The yacht control method provided by the application can discover potential risks in time, reduce the accident rate of the yacht as much as possible, and further improve the running safety and reliability of the yacht.
Description
Technical Field
The application belongs to the technical field of yacht control, and particularly relates to a yacht control method and system and a yacht.
Background
A yacht is a high-end durable consumer product integrating navigation, sports, entertainment, leisure and other functions, and is mainly used for private or commercial activities in offshore or inland waters.
When a yacht in the related art is sailing, conditions at sea are complex and changeable, so the yacht may collide under some circumstances. A collision affects the personal safety of passengers on the yacht and may also cause components of the yacht to fail, so that the yacht cannot run normally.
Disclosure of Invention
The embodiments of the present application provide a yacht control method and system and a yacht, which can solve the problem that complex and changeable sea conditions may cause the yacht to collide, affecting the safety of passengers and preventing the yacht from running normally.
In a first aspect, an embodiment of the present application provides a yacht control method, including:
Acquiring azimuth information of a driver; wherein the azimuth information is used for indicating a specific position of the driver in the cockpit;
analyzing and processing based on the azimuth information to obtain the state information of the driver; wherein the status information includes information for reflecting a specific status of the driver when driving the yacht;
Obtaining yacht take-over detection information according to the state information; wherein the yacht take-over detection information comprises information of a detection task of the yacht which is automatically taken over;
Detecting the periphery of the yacht after processing based on the yacht take-over detection information to obtain yacht detection data; the yacht detection data are used for reflecting specific conditions in an area extending from the outer contour of the yacht to a direction away from the outer contour of the yacht by a preset distance;
Based on the yacht detection data, reminding a driver of making corresponding measures.
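The five claimed steps can be sketched as a single control pass. The sketch below is a minimal illustration in Python, not the patent's implementation: the helm-station coordinates, the attentiveness rule, and the stubbed `detect_surroundings` sensor data are all assumptions introduced for clarity.

```python
from dataclasses import dataclass


@dataclass
class AzimuthInfo:
    """Driver's position in the cockpit (hypothetical coordinates in metres)."""
    x: float
    y: float


def analyze_state(azimuth: AzimuthInfo) -> str:
    """Derive a coarse driver state from position (illustrative rule only)."""
    # Assume the helm station is centred at (0, 0); far from it => inattentive.
    distance = (azimuth.x ** 2 + azimuth.y ** 2) ** 0.5
    return "attentive" if distance < 0.5 else "inattentive"


def takeover_detection_needed(state: str) -> bool:
    """Decide whether the yacht should automatically take over the detection task."""
    return state != "attentive"


def detect_surroundings(radius_m: float) -> list:
    """Stand-in for perimeter sensing out to the preset distance (stub data)."""
    return [{"type": "buoy", "distance_m": 12.0}] if radius_m >= 12.0 else []


def control_step(azimuth: AzimuthInfo, preset_distance_m: float = 50.0) -> list:
    """One pass of the claimed method: state -> take-over -> detect -> remind."""
    alerts = []
    state = analyze_state(azimuth)
    if takeover_detection_needed(state):
        for obstacle in detect_surroundings(preset_distance_m):
            alerts.append(f"Obstacle {obstacle['type']} at {obstacle['distance_m']} m")
    return alerts
```

In this reading, the reminder step simply surfaces whatever the automatic perimeter detection found while the driver was judged inattentive.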
According to the yacht control method provided by the application, the azimuth information of the driver is acquired first. By automatically acquiring this information, the specific position of the driver in the cockpit can be determined in real time, and information about the driver and the cockpit can be collected more conveniently for subsequent analysis. Analysis processing is then performed on the azimuth information to obtain the state information of the driver: by analyzing the driver's specific azimuth information with data processing methods, the current state of the driver can be obtained accurately and the driver's condition while driving the yacht can be understood in detail. This makes it possible to determine quickly and accurately whether the driver is following a safe route and the operating rules, thereby reducing the risk of accidents; knowing the driver's state in real time also allows the yacht to respond quickly when necessary.
The yacht take-over detection information is then obtained according to the state information. The driver's state, expression and even micro-expressions during driving can thus be detected more accurately, ensuring that more accurate yacht take-over detection information is obtained. By detecting the driver's state, it can be discovered in time whether the driver is in a state suitable for driving, so that accidents caused by fatigue, inattention or other adverse factors are avoided; systematically detecting the driver's state can also provide real-time monitoring capability for yacht operators or management organizations, ensuring that driving operations meet safety standards. The periphery of the yacht is detected after processing based on the yacht take-over detection information to obtain yacht detection data, so that the surroundings of the yacht can be detected automatically according to this information. Detecting the surroundings with an automatic detection method is more thorough and covers conditions outside the driver's field of view, including blind areas, which further improves safety. Finally, the driver is reminded to take corresponding measures based on the yacht detection data, preventing potential danger and reducing the probability of accidents through detection and analysis. Real-time analysis of the yacht detection data can inform the driver of potential problems immediately so that the driver can respond quickly; timely reminders help avoid situations in which the yacht may be damaged, thereby reducing maintenance costs and property losses, ensuring safe yacht operation and protecting passengers from injury.
Therefore, operating the yacht with this control method can reduce the accident rate of the yacht, greatly reduce the impact of a collision on the personal safety of passengers, and prevent components of the yacht from failing.
In a possible implementation manner of the first aspect, before the acquiring the position information of the driver, the method further includes:
Acquiring an intra-cabin image of the cockpit; wherein the intra-cabin image is used for reflecting specific conditions in the cockpit;
Detecting and analyzing based on the images in the cockpit to obtain the existence condition of personnel in the cockpit; wherein the personnel presence condition is used to indicate whether the driver is present in the cockpit;
Obtaining yacht takeover information under the condition that the existence condition of the personnel indicates that the driver does not exist in the cockpit; the yacht take-over information comprises information of a detection task of the yacht and a running task of the yacht which are automatically taken over;
And controlling the yacht based on the yacht takeover information and reminding a driver to take corresponding measures.
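A minimal sketch of this no-driver branch follows, with a toy `person_present` check standing in for real image detection; the pixel threshold and the shape of the returned take-over dictionary are assumptions, not the patent's implementation.

```python
def person_present(in_cabin_pixels: list, threshold: int = 128, min_count: int = 3) -> bool:
    """Toy presence check: count 'bright' pixels standing in for a detected body."""
    return sum(1 for p in in_cabin_pixels if p > threshold) >= min_count


def takeover_if_absent(in_cabin_pixels: list) -> dict:
    """Produce yacht take-over information when no driver is found in the cockpit."""
    if person_present(in_cabin_pixels):
        return {"take_over": False, "tasks": []}
    # Automatic take-over covers both the detection task and the driving task.
    return {"take_over": True,
            "tasks": ["detection", "driving"],
            "reminder": "Driver absent: please return to the cockpit"}
```

The reminder string in the returned dictionary corresponds to the step of alerting the driver to take corresponding measures.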
In a possible implementation manner of the first aspect, the analyzing, based on the azimuth information, obtains status information of the driver, including:
determining and analyzing based on the azimuth information to obtain the face orientation information of the driver; wherein the face orientation information is used for reflecting a specific direction in which the face of the driver is oriented;
Obtaining front and rear information of the driver according to the face orientation information; wherein the front-rear information includes information of the front and rear sides of the body of the driver;
obtaining body information of the driver according to the front and rear information of the driver; wherein the physical information is used to indicate a specific condition of the whole body of the driver;
and carrying out identification processing on the body information to obtain the state information of the driver.
In a possible implementation manner of the first aspect, the obtaining yacht takeover detection information according to the status information includes:
Performing head feature recognition processing based on the state information to obtain head information; wherein the head information includes information reflecting characteristics exhibited by the driver's face while driving the yacht;
performing upper body recognition processing of the driver based on the state information to obtain upper body information; wherein the upper body information includes information reflecting characteristics exhibited by the upper limbs of the driver while driving the yacht;
Performing lower body recognition processing of the driver based on the state information to obtain lower body information; the lower body information comprises information used for reflecting characteristics exhibited by lower limbs of the driver when driving the yacht;
And carrying out comprehensive analysis processing based on the head information, the upper body information and the lower body information to obtain the yacht take-over detection information.
In a possible implementation manner of the first aspect, the performing comprehensive analysis processing based on the head information, the upper body information and the lower body information to obtain the yacht take-over detection information includes:
Analyzing, processing and classifying based on the head information, the upper body information and the lower body information to obtain head classification information, upper body classification information and lower body classification information; wherein the head classification information is used for reflecting the characteristics of the driver's face when driving the yacht, the upper body classification information is used for reflecting the characteristics of the driver's upper limbs when driving the yacht, and the lower body classification information is used for reflecting the characteristics of the driver's lower limbs when driving the yacht;
and carrying out classification analysis processing on the head classification information, the upper body classification information and the lower body classification information to obtain the yacht take-over detection information.
In a possible implementation manner of the first aspect, the performing classification analysis processing on the head classification information, the upper body classification information and the lower body classification information to obtain the yacht take-over detection information includes:
Performing classification analysis processing on the head classification information, the upper body classification information and the lower body classification information to obtain head range information, upper body range information and lower body range information; wherein the head range information is used for indicating the range occupied by the display state of the driver's face, the upper body range information is used for indicating the range occupied by the display state of the driver's upper limbs, and the lower body range information is used for indicating the range occupied by the display state of the driver's lower limbs;
And obtaining the yacht take-over detection information according to the head range information, the upper body range information and the lower body range information.
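One plausible reading of this step is a weighted fusion of the three range values into one abnormality score, thresholded to decide whether the detection task should be taken over automatically. The weights (head emphasised) and the threshold below are illustrative assumptions, not values from the patent.

```python
def fused_takeover_score(head_range: float, upper_range: float, lower_range: float) -> float:
    """Weighted fusion of the head, upper-body and lower-body range values.
    The weights are assumptions introduced for illustration."""
    return 0.5 * head_range + 0.3 * upper_range + 0.2 * lower_range


def yacht_takeover_detection(head_range: float, upper_range: float,
                             lower_range: float, threshold: float = 0.6) -> bool:
    """Trigger automatic take-over of the detection task when the fused
    abnormality score reaches the (assumed) threshold."""
    return fused_takeover_score(head_range, upper_range, lower_range) >= threshold
```

With range values normalised to [0, 1], a strongly abnormal head posture alone (0.5 weight) is not quite enough to trigger take-over under these assumed weights; it must be corroborated by at least one body cue.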
In a possible implementation manner of the first aspect, the detecting the periphery of the yacht after processing based on the yacht take-over detection information to obtain yacht detection data includes:
detecting the periphery of the yacht after processing the yacht take-over detection information to obtain first detection information and second detection information; wherein the first detection information includes information for reflecting a kind of an obstacle or a type of the obstacle in a detection area, and the second detection information includes information for reflecting a distance between the obstacle and an outline of the yacht;
Performing combination processing based on the first detection information and the second detection information to obtain total detection information; wherein the total detection information is used for reflecting specific types and specific positions of the obstacles;
And obtaining the yacht detection data according to the total detection information.
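The combination of the first detection information (obstacle types) and the second detection information (distances to the yacht's outer contour) into total detection information can be sketched as a simple pairing; the list-of-dicts shape is an assumption introduced for illustration.

```python
def combine_detection(first: list, second: list) -> list:
    """Pair obstacle types (first detection information) with distances to the
    yacht's outer contour (second detection information) into total detection
    information: one record per obstacle with its type and position."""
    if len(first) != len(second):
        raise ValueError("type and distance lists must align")
    return [{"type": t, "distance_m": d} for t, d in zip(first, second)]
```

The resulting records reflect both the specific type and the specific position of each obstacle, as the total detection information requires.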
In a possible implementation manner of the first aspect, the reminding the driver to make the corresponding measure based on the yacht detection data includes:
Acquiring real-time information of the yacht; wherein the real-time information includes information for reflecting a moving speed, a moving direction and a moving track when the yacht travels;
Processing and analyzing based on the real-time information and the yacht detection data to obtain real-time influence information; the real-time influence information is used for reflecting the influence condition between the obstacle and the yacht;
Obtaining movement improvement data according to the real-time influence information; wherein the movement improvement data includes data for reflecting adjustment and improvement of a movement direction, a movement speed, and a movement trajectory of the yacht;
and reminding a driver of taking corresponding measures according to the movement improvement data.
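The movement-improvement step above — adjusting speed, direction and trajectory when an obstacle comes too close — might look like the following sketch. The halve-speed and 15-degree-nudge policy and the 30 m safe distance are illustrative assumptions, not values from the patent.

```python
def movement_improvement(speed_kn: float, heading_deg: float, obstacles: list,
                         safe_distance_m: float = 30.0) -> dict:
    """Suggest speed/heading adjustments when an obstacle is inside the safe
    radius; otherwise keep the current motion and raise no alert."""
    closest = min((o["distance_m"] for o in obstacles), default=float("inf"))
    if closest >= safe_distance_m:
        return {"speed_kn": speed_kn, "heading_deg": heading_deg, "alert": None}
    # Halve speed and nudge heading away; the exact policy is an assumption.
    return {"speed_kn": speed_kn / 2.0,
            "heading_deg": (heading_deg + 15.0) % 360.0,
            "alert": f"Obstacle {closest} m ahead: slow down and alter course"}
```

The `alert` field corresponds to the reminder given to the driver; the other fields correspond to the adjusted movement speed, direction and trajectory.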
In a second aspect, an embodiment of the present application provides a yacht control system comprising:
An acquisition unit configured to acquire azimuth information of a driver; wherein the azimuth information is used for indicating a specific position of the driver in the cockpit;
The analysis unit is used for carrying out analysis processing based on the azimuth information to obtain the state information of the driver; wherein the status information includes information for reflecting a specific status of the driver when driving the yacht;
the first obtaining unit is used for obtaining the yacht take-over detection information according to the state information; wherein the yacht take-over detection information comprises information of a detection task of the yacht which is automatically taken over;
The second obtaining unit is used for detecting the periphery of the yacht after processing based on the yacht take-over detection information to obtain yacht detection data; the yacht detection data are used for reflecting specific conditions in an area extending from the outer contour of the yacht to a direction away from the outer contour of the yacht by a preset distance;
And the reminding unit is used for reminding a driver of making corresponding measures based on the yacht detection data.
In a third aspect, embodiments of the present application provide a yacht comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method as described in any one of the first aspects above.
In a fourth aspect, embodiments of the present application provide a computer program product which, when run on a yacht, causes the yacht to perform the yacht control method as described in any one of the first aspects above.
It will be appreciated that the advantages of the second to fourth aspects may be found in the relevant description of the first aspect and are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a yacht control method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of an implementation flow of steps S100A to S100D in a yacht control method according to an embodiment of the present application;
Fig. 3 is a schematic implementation flow chart of step S200 in a yacht control method according to an embodiment of the present application;
fig. 4 is a schematic implementation flow chart of step S300 in the yacht control method according to an embodiment of the present application;
fig. 5 is a schematic diagram of an implementation flow of step S340 in a yacht control method according to an embodiment of the present application;
fig. 6 is a schematic diagram of an implementation flow of step S342 in a yacht control method according to an embodiment of the present application;
Fig. 7 is a schematic flowchart of an implementation of step S400 in a yacht control method according to an embodiment of the present application;
Fig. 8 is a schematic diagram of an implementation flow of step S500 in a yacht control method according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a yacht control system according to an embodiment of the present application;
fig. 10 is a schematic structural view of a yacht control device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in the present description and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as meaning "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
A yacht is a high-end durable consumer product integrating navigation, sports, entertainment, leisure and other functions, and is mainly used for private or commercial activities in offshore or inland waters.
When a yacht in the related art is sailing, conditions at sea are complex and changeable, so the yacht may collide under some circumstances. A collision affects the personal safety of passengers on the yacht and may also cause components of the yacht to fail, so that the yacht cannot run normally.
In order to solve these problems, the embodiments of the application provide a yacht control method, a yacht control system and a yacht. According to the method, the azimuth information of the driver is acquired first. By automatically acquiring this information, the specific position of the driver in the cockpit can be determined in real time, and information about the driver and the cockpit can be collected more conveniently for subsequent analysis. Next, the state information of the driver is obtained based on the azimuth information: by analyzing the driver's specific azimuth information with data processing methods, the current state of the driver can be obtained accurately and the driver's condition while driving the yacht can be understood in detail. This makes it possible to determine quickly and accurately whether the driver is following a safe route and the operating rules, thereby reducing the risk of accidents; knowing the driver's state in real time also helps the yacht respond quickly when necessary.
The yacht take-over detection information is then obtained according to the state information. The driver's state, expression and even micro-expressions during driving can thus be detected more accurately, ensuring that more accurate yacht take-over detection information is obtained. By detecting the driver's state, it can be discovered in time whether the driver is in a state suitable for driving, so that accidents caused by fatigue, inattention or other adverse factors are avoided; systematically detecting the driver's state can also provide real-time monitoring capability for yacht operators or management organizations, ensuring that driving operations meet safety standards. The periphery of the yacht is detected after processing based on the yacht take-over detection information to obtain yacht detection data, so that the surroundings of the yacht can be detected automatically according to this information. Detecting the surroundings with an automatic detection method is more thorough and covers conditions outside the driver's field of view, including blind areas, which further improves safety. Finally, the driver is reminded to take corresponding measures based on the yacht detection data, preventing potential danger and reducing the probability of accidents through detection and analysis. Real-time analysis of the yacht detection data can inform the driver of potential problems immediately so that the driver can respond quickly; timely reminders help avoid situations in which the yacht may be damaged, thereby reducing maintenance costs and property losses, ensuring safe yacht operation and protecting passengers from injury.
Therefore, operating the yacht with this control method can reduce the accident rate of the yacht, greatly reduce the impact of a collision on the personal safety of passengers, and prevent components of the yacht from failing.
The yacht control method provided by the embodiment of the application can be applied to a yacht, and the yacht is the execution body of the method; the embodiment of the application does not limit the specific type of yacht.
For example, the yacht may be a luxury yacht, an ordinary yacht or a business yacht, equipped with an advanced navigation system and safety equipment. During offshore operation, the yacht control method provided by the embodiment of the application, integrated in the yacht's control system, can obtain the azimuth information of the driver in real time, analyze the driver's state, and detect the surrounding environment of the yacht based on this information. When a potential danger is detected, the control system can promptly prompt the driver to take corresponding measures, ensuring safe operation of the yacht.
In order to better understand the yacht control method provided by the embodiment of the present application, the following exemplary description is provided for a specific implementation procedure of the yacht control method provided by the embodiment of the present application.
Fig. 1 shows a schematic flow chart of a yacht control method provided by an embodiment of the present application, where the yacht control method includes:
S100, acquiring azimuth information of a driver; wherein the azimuth information is used to indicate the specific position of the driver within the cockpit.
For example, the azimuth information of the driver may be obtained by capturing an image of the cockpit with a camera and recognizing the driver's position in the image using image processing techniques. Other sensors or devices, such as radar or infrared sensors, may also be used, and the driver's position may be acquired through GPS. The acquired information may include the driver's coordinate position and the distance or angle relative to the cockpit, for use in subsequent analysis.
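As a concrete illustration of turning an image detection into azimuth information, a detected bounding box around the driver can be reduced to normalised cockpit coordinates. The `(x, y, w, h)` box format and the normalisation scheme are assumptions introduced for illustration, not the patent's method.

```python
def driver_position_from_bbox(bbox: tuple, image_size: tuple) -> tuple:
    """Convert a detected driver bounding box (x, y, w, h) in pixels into
    normalised cockpit coordinates in [0, 1] — a stand-in for the full
    image-processing pipeline described in the text."""
    x, y, w, h = bbox
    width, height = image_size
    # Use the box centre as the driver's position within the frame.
    return ((x + w / 2) / width, (y + h / 2) / height)
```

Normalised coordinates make the position independent of camera resolution, so later analysis (e.g. distance from the helm station) can use one fixed cockpit model.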
In one possible implementation, referring to fig. 2, before the obtaining the azimuth information of the driver, the method further includes:
S100A, acquiring an intra-cabin image of a cockpit; wherein the in-cabin image is used to reflect the specific conditions within the cockpit.
For example, an image in the cockpit can be acquired by an image acquisition device, an image sensor or the like to acquire an image of a specific situation in the cockpit; the real-time image in the cockpit can be captured through a camera, and the camera can be arranged at the top position in the cockpit, the position capable of freely changing the angle or the position capable of shooting the whole cockpit so as to capture the field of view of the whole cockpit.
S100B, detecting and analyzing based on the cabin images to obtain the existence condition of personnel in the cockpit; wherein the personnel presence is used to indicate whether a driver is present in the cockpit.
It will be appreciated that after the in-cabin image is obtained, it is analyzed using image analysis techniques to determine the driver's position in the image. For example, image recognition can identify the driver's body contour and determine its coordinate position in the image. Deeper analysis of the acquired in-cabin image can further identify specific characteristics of the driver, such as clothing color, as well as the driver's behavior and actions: gestures, head movements and body postures may be analyzed to understand the driver's presence more fully. The deep analysis may be based on advanced machine learning algorithms or deep learning models, trained on a large amount of data to improve the accuracy and reliability of recognition.
S100C, under the condition that the existence of personnel indicates that no driver exists in the cockpit, yacht take-over information is obtained; the yacht take-over information comprises information of a detection task of the yacht which is automatically taken over and a running task of the yacht.
It can be appreciated that when analysis of the in-cabin image detects that no driver is present, this result is fed back and the take-over mechanism of the yacht is automatically triggered. The yacht take-over information records in detail the detection tasks and driving tasks of the yacht during automatic take-over. The detection tasks may include checking whether the various devices of the yacht, such as the engine, the navigation system and the safety equipment, are operating properly. The driving task involves the autopilot of the yacht navigating and driving automatically according to a preset route or target location.
Recording the yacht take-over information facilitates subsequent analysis and evaluation of the yacht's operating state, improves the safety and reliability of the yacht, and ensures that the yacht can transition safely and smoothly to the automatic driving mode when the driver cannot continue driving, thereby avoiding potential safety risks. Meanwhile, the yacht detection data are analyzed in real time so that the driver is reminded in time to take corresponding measures, further improving the safety and reliability of yacht operation.
S100D, controlling the yacht based on the yacht take-over information and reminding the driver to take corresponding measures.
It will be appreciated that the yacht is controlled through the yacht take-over information. Since yachts typically have multiple driving modes, including a manual driving mode, a semi-manual driving mode and an automatic driving mode, obtaining the driving-mode information of the yacht helps to determine whether the yacht is able to take over driving automatically. If the yacht is currently in the automatic driving mode, it has greater autonomy and flexibility to cope with situations in which the driver cannot continue driving. If the yacht is in the manual driving mode, the automatic take-over process must supply the judgment and control that manual driving would otherwise provide, ensuring that the yacht can operate stably according to a preset course or mission without a person at the controls. Meanwhile, the running-task information in the yacht take-over information can also be used to generate reminder information for the driver, reminding the driver to return to the cockpit in time, to pay attention to the running state of the yacht, and to take corresponding measures when necessary. The reminder may be delivered by an alarm, a broadcast, a mobile-phone message or a watch associated with the yacht's control center, or transmitted to the driver through an on-board display screen, an audible prompt or other communication means, but is not limited to these.
S200, analyzing and processing the azimuth information to obtain state information of the driver; wherein the state information reflects the specific state of the driver when driving the yacht.
Illustratively, the azimuth information is first denoised, invalid data are cleaned up, and missing values are handled; all data are then synchronized on the time axis. The data are normalized or standardized, and statistical features of position, velocity and acceleration, such as the mean, variance, maximum and minimum, are extracted; frequency-domain features, such as the dominant frequency components and the energy distribution, are extracted by methods such as the Fourier transform. Finally, behavior features are extracted and the driver's state and behavior are analyzed. The driver's azimuth information can be analyzed with machine-learning algorithms or image-recognition techniques to recognize the driver's posture, actions, expressions and the like, from which the driving state is inferred. The state information may include whether the driver is fatigued, whether attention is focused, whether safety operating procedures are being followed, and so on. By analyzing the state information, possible problems or risks of the driver during driving can be discovered in time.
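The time-domain statistics and Fourier-based frequency feature extraction described above can be sketched as follows; the function name `extract_features` and the choice of features are illustrative assumptions.

```python
import numpy as np

def extract_features(signal: np.ndarray, fs: float) -> dict:
    """Time-domain statistics plus the dominant frequency of a
    position/acceleration trace sampled at fs Hz."""
    feats = {
        "mean": float(signal.mean()),
        "var": float(signal.var()),
        "max": float(signal.max()),
        "min": float(signal.min()),
    }
    # Remove the DC component, then find the strongest spectral peak
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    feats["dominant_freq"] = float(freqs[spectrum.argmax()])
    return feats

# A 2 Hz sine sampled at 50 Hz for 4 seconds: dominant frequency is 2.0 Hz
t = np.arange(0, 4, 1 / 50)
feats = extract_features(np.sin(2 * np.pi * 2 * t), fs=50)
print(feats["dominant_freq"])  # 2.0
```

A real driver-motion trace would of course be noisier; the cleaning and time-axis synchronization mentioned above would precede this step.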
Analyzing and processing the azimuth information helps to discover and handle, in time, problems and risks that the driver may encounter while driving the yacht, thereby safeguarding the operating safety of the yacht and the physical health of the driver. Obtaining the driver's state information from the azimuth information further enables monitoring and evaluation of the driver's driving behavior. This helps to improve yacht driving safety, can provide more personalized driving assistance and guidance for the driver, and improves the driving experience.
In one possible implementation, referring to fig. 3, S200, the analyzing process based on the azimuth information to obtain the state information of the driver includes:
S210, determining and analyzing based on the azimuth information to obtain the face orientation information of the driver; wherein the face orientation information is used to reflect the specific direction in which the driver's face is oriented.
For example, data features may be extracted from the azimuth information: the azimuth information is first denoised, invalid data are cleaned up, missing values are handled, and the data are synchronized on the time axis. The data are normalized or standardized, and frequency-domain features such as the dominant frequency components and the energy distribution are extracted by methods such as the Fourier transform; finally, behavior features are extracted to obtain and analyze the driver's face orientation information. The driver's facial features can be identified from the in-cabin image, and their positions in the image determined, by azimuth-information recognition techniques. The driver's face orientation may then be analyzed further by combining deep-learning and computer-vision algorithms: key points of the facial features, such as the eyes, nose and mouth, are identified, and the face orientation is determined by analyzing the relative positions and directions of these key points, thereby obtaining the specific position of the head. Face orientation information is critical for assessing the driver's driving state. For example, if the driver's face is turned away from the waters ahead or from the driving instruments, this may mean that the driver is not focused or is distracted. Monitoring and analyzing the face orientation information in real time makes it possible to detect this promptly and remind the driver to adjust attention, ensuring driving safety. In addition, the face orientation information may be combined with other sensor data, such as the steering-wheel angle and the yacht's speed, to assess the driver's behavior and state more fully.
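Determining face orientation from the relative positions of facial key points can be sketched as follows. This is a simplified geometric sketch assuming 2D eye and nose-tip landmarks are already available from a landmark detector; the function name `estimate_yaw` and the normalization are illustrative, and a production system would compute full 3D pose.

```python
import math

def estimate_yaw(left_eye, right_eye, nose):
    """Rough yaw indicator: horizontal deviation of the nose tip from the
    midpoint between the eyes, normalized by the inter-eye distance.
    Near 0 means facing the camera; larger magnitude means the head is
    turned away."""
    mid_x = (left_eye[0] + right_eye[0]) / 2.0
    inter_eye = math.dist(left_eye, right_eye)
    return (nose[0] - mid_x) / inter_eye

# Frontal face: nose tip centred between the eyes
print(estimate_yaw((40, 50), (60, 50), (50, 60)))   # 0.0
# Head turned: nose tip shifted towards the right eye
print(estimate_yaw((40, 50), (60, 50), (57, 60)))   # 0.35
```

A threshold on the magnitude of this value (combined with pitch and roll) could then flag that the driver's face is turned away from the waters ahead.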
For example, if the driver's face orientation does not match the steering wheel angle, this may indicate that the driver is behaving improperly while operating, requiring further attention and prompting.
S220, obtaining front and rear information of a driver according to the face orientation information; wherein the front-rear information includes information of the front and rear sides of the body of the driver.
For example, the specific position of the face on the body can be obtained from the face orientation information, and the front-rear orientation of the driver can be determined from features at that position, such as the nose, mouth, eyes and forehead: the front of the driver's body is the side on which the nose, mouth and eyes are located, and the rear is the side on which the back is located. It is understood that the nose, mouth, eyes and forehead of a person are on the front, so the front can be determined from the feature information of the nose and mouth, while the back and spine are on the rear. Compared with the rear, the front has greater convexity, so the front and rear of the driver's body are determined from the convexity features of the nose and mouth, and the driver's front-rear information can thus be obtained clearly.
S230, obtaining the body information of the driver according to the front and rear information of the driver; wherein the body information indicates the specific condition of the driver's whole body.
Illustratively, noise and invalid data are first removed and missing values are handled. Data from different sensors are synchronized on the time axis and normalized or standardized to suit subsequent algorithmic analysis. Computer-vision techniques are used to extract information such as body key points, head posture, hand positions and foot positions, and the acquired information is manually labeled with the corresponding driving state. The driver's body information is then obtained from the front-rear information; it may include the driver's posture, gestures and actions. The body information helps to comprehensively evaluate the driver's driving state and comfort level, so that after it is obtained, the driving behavior can be analyzed further, for example to judge whether the driver maintains a correct driving posture and whether signs of fatigue or discomfort appear. The smoothness of the driver's actions, reaction speed and ability to cope with emergencies during driving can also be observed, and by combining other sensor data and algorithms, real-time monitoring and early warning of the driver's physical state can be realized, further improving driving safety and comfort.
S240, carrying out recognition processing on the body information to obtain the state information of the driver.
By way of example, the recognition processing of the body information may further use machine-learning or deep-learning algorithms to extract and analyze key features in the body information. Such features may include the stability of the driver's posture, the frequency of actions and the like, which reflect the driver's driving state, level of alertness and possible risks. Through in-depth analysis and pattern recognition of these features, state information of the driver can be obtained, such as whether the driver is fatigued, whether attention is focused, and whether operating procedures are being followed. Such state information is critical for assessing the driver's driving ability, predicting potential risks, and providing personalized driving assistance and guidance. Real-time monitoring and analysis of the driver's state information allows problems and risks during yacht driving to be discovered and handled in time, safeguarding the operating safety of the yacht and the physical health of the driver. At the same time, this information can be used to optimize the driving experience and improve the driver's comfort and satisfaction. Recognizing and processing the body information to obtain the driver's state information is therefore a key step in realizing intelligent driving monitoring and evaluation.
Information such as facial key points, head posture, eye state (open/closed) and hand positions is extracted using computer-vision techniques. The mean, variance and frequency characteristics are extracted from signals such as heart rate, respiratory rate and skin conductivity. The driver's behavior patterns, such as head movements, blink frequency and yawning, are analyzed. Different physical states of the driver (such as attentive, fatigued or distracted) are labeled through laboratory simulation or actual driving data. The most representative features are selected for model training; supervised learning (such as support vector machines, random forests and neural networks) or unsupervised learning (such as clustering algorithms) is used to train a model capable of identifying the driver's state, and methods such as cross-validation are used to validate and tune the model to ensure its accuracy and robustness. Body information acquired in real time is then input into the trained model to judge the driver's state information in real time.
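Before a learned model is available, the mapping from behavior features to a state label can be approximated by simple rules, as sketched below. The function name and all thresholds are illustrative assumptions only; as described above, a deployed system would learn the decision boundaries from labeled driving data rather than hard-code them.

```python
def classify_state(blink_rate_per_min: float, yawns_per_min: float,
                   eyes_closed_ratio: float) -> str:
    """Toy rule-based classifier over behavior features.
    eyes_closed_ratio is the fraction of recent frames with eyes closed."""
    if eyes_closed_ratio > 0.3 or yawns_per_min >= 3:
        return "fatigued"       # prolonged eye closure or frequent yawning
    if blink_rate_per_min > 30:
        return "distracted"     # abnormally high blink rate
    return "attentive"

print(classify_state(15, 0, 0.05))  # attentive
print(classify_state(15, 4, 0.10))  # fatigued
print(classify_state(40, 0, 0.05))  # distracted
```

The same three-way label set ("attentive", "fatigued", "distracted") would be the target classes of the supervised model described above.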
S300, obtaining yacht take-over detection information according to the state information; the yacht take-over detection information comprises information of a detection task of the yacht which is automatically taken over.
It will be appreciated that, based on the state information exhibited by the driver, it can be further analyzed whether the yacht should enter the take-over state. For example, if the driver's state information indicates fatigue or inattention, the driver may be considered unfit for normal driving, and yacht take-over detection information is obtained to control the yacht. After the driver's state information is acquired, it is checked so that it can be discovered in time whether the driver has the condition and ability to operate the yacht, avoiding situations in which the yacht cannot be handled in time in an emergency. For example, take-over detection indices and thresholds may be set: it may be required that the driver's concentration reach a certain level, or that the driver's fatigue not exceed a certain threshold, for the driver to be allowed to continue operating the yacht. By comparing these indices and thresholds with the driver's state information, it can be accurately determined whether the driver is fit to operate the yacht. Alternatively, when the driver's driving state is determined to be abnormal (for example, the normal habit is to hold the operation panel with both hands, but the driver has begun operating it with one hand, which suggests a possibly abnormal state), i.e., the driver cannot continue to drive the yacht, the yacht automatically takes over according to a preset route or task. The detection tasks of the automatically taken-over yacht may use advanced navigation techniques, sensors and algorithms to ensure safe and stable operation of the yacht.
Meanwhile, the running state of the yacht can be monitored in real time, and appropriate control measures taken when necessary to cope with emergencies or to ensure the safety of the yacht. It should be emphasized that before take-over control is executed, safety checks are also required to ensure that the yacht can transition safely and smoothly into the take-over state; these checking and confirming operations may include verifying that the yacht's devices are operating properly, confirming that the take-over systems are in good condition, and so on.
The yacht take-over detection information provides clear instructions and a basis for the yacht: when the driver cannot continue driving, the yacht can automatically take over control according to the take-over detection information and drive safely along a preset route or task. This greatly reduces the potential risks caused by driver factors and improves the overall safety of the yacht. By monitoring the driver's state information in real time, it can be discovered promptly whether the driver's driving ability is impaired, for example by fatigue, distraction or physical discomfort; such state information is critical for determining whether the driver is fit to continue driving the yacht. Once an abnormal driving state is detected, the system reacts rapidly and starts the take-over procedure to ensure the safe operation of the yacht. This not only improves the driver's comfort but can also extend the service life and maintenance intervals of the yacht, since corresponding measures can be taken in advance to prevent potential risks. Meanwhile, the automatic driving algorithms and strategies can be continuously optimized, improving the yacht's level of autonomy and adaptability.
In one possible implementation, referring to fig. 4, S300, obtaining yacht takeover detection information according to the status information includes:
S310, performing head feature recognition processing based on the state information to obtain head information; wherein the head information includes information reflecting the characteristics exhibited by the driver's face while driving the yacht.
Illustratively, in the head feature recognition process, facial features are extracted from the preprocessed video data using computer-vision techniques. The face location is detected with algorithms such as a Haar cascade classifier, HOG+SVM, or a deep-learning model such as MTCNN; facial key points are extracted using Dlib, OpenCV or a deep-learning model (e.g., FaceNet or MTCNN), typically 68 key points covering the eyes, eyebrows, nose, mouth, contour and so on; and the three-dimensional pose of the head (pitch, yaw and roll angles) is calculated from the facial key points. The facial features related to the driving state may include, for example, the open/closed state of the eyes, the blink frequency and the degree of mouth opening. Alternatively, the eye-closure state may be detected via the EAR (eye aspect ratio) and the degree of mouth opening via the MAR (mouth aspect ratio), the blink frequency and blink duration calculated, and yawning behavior identified, so as to obtain the head information.
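The EAR computation mentioned above is a standard formula over the six eye landmarks, sketched here; the specific landmark coordinates in the example are synthetic values for illustration.

```python
import math

def ear(p1, p2, p3, p4, p5, p6):
    """Eye aspect ratio over the six standard eye landmarks
    (p1/p4 = eye corners, p2/p3 upper lid, p5/p6 lower lid):
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    return (math.dist(p2, p6) + math.dist(p3, p5)) / (2 * math.dist(p1, p4))

# Open eye: vertical lid gaps are a substantial fraction of the eye width
open_eye = ear((0, 0), (1, 0.6), (3, 0.6), (4, 0), (3, -0.6), (1, -0.6))
# Nearly closed eye: lids almost touching
closed_eye = ear((0, 0), (1, 0.1), (3, 0.1), (4, 0), (3, -0.1), (1, -0.1))
print(round(open_eye, 2), round(closed_eye, 2))  # 0.3 0.05
```

The EAR drops sharply when the eye closes, so thresholding it per frame and counting threshold crossings gives the blink frequency; the MAR is computed analogously over the mouth landmarks.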
S320, performing upper body recognition processing on the driver based on the state information to obtain upper body information; wherein the upper body information includes information reflecting characteristics exhibited by the upper limbs of the driver while driving the yacht.
It can be understood that the upper-body recognition processing of the driver is performed based on the state information, mainly to extract the condition features exhibited by the driver's upper body, in particular the feature information exhibited by the upper limbs. This information is critical in assessing the driver's driving state and whether it is appropriate to continue driving the yacht. For example, the driver's upper-body region can be identified and extracted from the state information using computer-vision and image-processing techniques, implemented by background subtraction or a deep-learning method (such as a target-detection model), so as to accurately locate the position and boundary of the driver's upper body. Next, the extracted upper-body region is subjected to feature analysis, which may include detecting and analyzing the driver's posture, movements and the dynamic changes of the upper body, for example the position, posture and motion of the arms and how these change over time. Such feature information reflects the driver's physical state and behavior pattern while driving the yacht. In addition, deep-learning or machine-learning algorithms can be used to further extract and quantify the upper-body feature information; for example, a model may be trained to recognize specific actions or gestures of the driver, such as lifting, lowering or turning the arms, which may be related to the driver's attention, fatigue or driving ability and can therefore serve as important indicators for assessing the driver's state.
Alternatively, the movement and posture changes of the driver's upper body may be detected through the state information together with an accelerometer and gyroscope, with pressure sensors arranged on the seat and the steering wheel to detect the driver's posture and contact points. For skeleton detection, a pose-estimation model detects the driver's upper-body skeleton, and changes in the skeleton key points are analyzed to identify specific actions of the driver (such as turning, reaching out, or adjusting the seat). The posture angles of the upper body (pitch, roll and yaw) are calculated from the accelerometer and gyroscope data; the seat and steering-wheel pressure-sensor data are analyzed to obtain the driver's sitting posture and changes in contact points; and the most representative features, such as the skeleton key points, posture angles and pressure distribution, are then selected. Finally, the extracted upper-body information may be fused with other state information (such as the head information or voice information) to obtain a more comprehensive assessment of the driver's state. By comprehensively considering information from different sources, the driver's driving state can be judged accurately and a decision made on whether the yacht should take over.
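Computing the upper-body posture angles from accelerometer data, as described above, uses the direction of gravity; a minimal sketch follows, assuming a body-mounted accelerometer reading (in g) dominated by gravity, with the yaw angle omitted since it cannot be recovered from the accelerometer alone.

```python
import math

def posture_angles(ax: float, ay: float, az: float):
    """Pitch and roll (degrees) of the torso from a gravity-dominated
    accelerometer reading; z is assumed to point along the upright torso."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

print(posture_angles(0.0, 0.0, 1.0))   # upright: pitch and roll both 0.0
print(posture_angles(-1.0, 0.0, 0.0))  # torso pitched roughly 90 degrees
```

In practice the gyroscope data would be fused with these angles (e.g., by a complementary or Kalman filter) to reject the motion acceleration that contaminates the raw reading.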
S330, performing lower body recognition processing on the driver based on the state information to obtain lower body information; wherein the lower body information includes information reflecting characteristics exhibited by the lower limbs of the driver while driving the yacht.
Illustratively, advanced computer-vision and image-processing techniques are first used to analyze the driver's lower-body state in depth. Based on the state information, the driver's lower-body region can be extracted from the captured lower-body data; this generally involves segmenting and recognizing the images so as to accurately distinguish the driver's lower body from the background. Feature extraction and quantitative analysis are then performed on the extracted lower-body region, which may include, for example, the posture and motion of the driver's legs and how they change over time. It is also possible to detect whether the driver frequently adjusts the sitting posture, whether the legs shake or vibrate abnormally, or whether the legs are in positions unrelated to the driving operation, any of which may indicate fatigue or distraction. In addition, a deep-learning model can extract and classify higher-level features of the lower-body image to judge the driver's driving state more accurately. By comprehensively considering the lower-body information, the driver's physical state and behavior pattern while driving the yacht can be understood more fully, providing a more accurate basis for the take-over decision. Meanwhile, the lower-body information can be fused with other state information (such as the head information and upper-body information) to improve the accuracy and reliability of the overall state assessment. Finally, based on this combined information, a decision is made on whether to take over the yacht, ensuring safe operation and a comfortable experience for the passengers.
S340, performing comprehensive analysis processing based on the head information, the upper body information and the lower body information to obtain yacht take-over detection information.
It will be appreciated that after the head information, upper-body information and lower-body information are obtained, comprehensive analysis must be performed on them to obtain the yacht take-over detection information, ensuring that the driver's driving state can be evaluated comprehensively from multiple dimensions so that an accurate take-over decision can be made.
Illustratively, the head information, upper-body information and lower-body information are first cleaned of noise and invalid data, and the key features are enhanced. The three sources are time-aligned on a unified time axis, and the data are normalized to eliminate differences in scale. Head features are then extracted: facial key points are obtained using computer-vision techniques; eye-movement features provide the eye open/closed state, blink frequency and gaze point; and the three-dimensional head pose (pitch, yaw and roll angles) is calculated from the head posture. Upper-body features are extracted: skeleton detection with models such as OpenPose or MediaPipe yields the key-point positions; action recognition identifies upper-body actions such as steering-wheel operation and reaching; and the upper-body posture angles are calculated from accelerometer and gyroscope data. Lower-body features are extracted: pressure features from the seat and pedal data reveal the driver's sitting posture and pedal operation, and action recognition identifies leg movements. After these features are obtained, the head information is analyzed in depth; it mainly concerns the driver's facial features and head posture, such as the open/closed state of the eyes, blink frequency, degree of mouth opening and the three-dimensional head pose. Such information directly reflects the driver's level of attention and fatigue.
If the driver blinks frequently, yawns, or exhibits abnormal head posture, this may mean the driver is tired or not concentrating, and the need for the yacht to take over may increase. Further analysis is required in combination with the upper-body information, which focuses mainly on the movements and postures of the driver's upper limbs, such as the positions, postures and motion changes of the arms; these reveal the driver's physical state and behavior pattern while driving the yacht. For example, if the driver frequently adjusts the sitting posture or the arm motions are abnormal, the driving state may be unstable and require more attention. Finally, the lower-body information must also be considered; it mainly concerns the movements and postures of the driver's lower limbs, such as leg posture, motion changes and whether the sitting posture is frequently adjusted, and can likewise provide important clues about the driving state. If the driver's lower-limb movements are unrelated to the driving operation or are abnormal, this too may be a sign of fatigue or inattention.
The yacht take-over detection information is obtained on the basis of the head information, upper-body information and lower-body information; it comprehensively considers driving-state characteristics across multiple dimensions so that whether the driver is fit to continue driving the yacht can be judged accurately. If the comprehensive analysis shows that the driver's driving state is at risk, such as fatigue or inattention, an alarm can be issued or other necessary measures taken to ensure the safe operation of the yacht and the safety of the passengers. The rich information provided by the head, upper-body and lower-body information can thus be fully utilized, improving the accuracy and reliability of yacht take-over detection. This helps to reduce the potential risks of driver fatigue or distraction, ensuring safe operation of the yacht and a comfortable experience for the passengers.
In one possible implementation, referring to fig. 5, S340, performing comprehensive analysis processing based on the head information, the upper body information and the lower body information to obtain yacht take-over detection information includes:
S341, analyzing, processing and classifying the head information, the upper body information and the lower body information to obtain head classification information, upper body classification information and lower body classification information; the head classification information reflects the quality of the features exhibited by the driver's face when driving the yacht, the upper body classification information reflects the quality of the features exhibited by the driver's upper limbs, and the lower body classification information reflects the quality of the features exhibited by the driver's lower limbs.
Illustratively, after the head information, upper-body information and lower-body information are acquired, the features of each part are first extracted. For the head information, the facial key points are extracted using computer-vision techniques; the eye-movement features provide the eye open/closed state, blink frequency, gaze point and degree of mouth opening; the three-dimensional head pose (pitch, yaw and roll angles) is calculated from the head posture; and micro-expressions and the facial color (for example red, white or yellow, each color representing a different condition of the driver) may also be considered, though the features are not limited to these. The extracted features are then classified, for example into normal-state features (the driver's exhibited features are good) and abnormal-state features (the driver's exhibited features are bad). Taking the degree of eye opening as an example, expressed as an EAR value: according to studies and experiments, the EAR of normal eyes lies between 0.2 and 0.3, though specific values vary between individuals. Typically, 0.25-0.30 indicates that the eyes are fully open; 0.20-0.25 indicates that the eyes are partially open; and below 0.20 the eyes may be close to or fully closed. The eyes may therefore be regarded as in a normal state at 0.23-0.30 and in an abnormal state below 0.20. Through such classification, the head classification information is obtained, which intuitively reflects the condition of the driver's facial features. Similar feature extraction and classification are performed for the upper-body information and the lower-body information.
For the upper-body information, the skeleton key-point positions, arm postures and action-change features are extracted, and whether the driver's heartbeat is at a normal frequency is detected. For the lower-body information, features such as leg posture, motion changes and whether the sitting posture is frequently adjusted are analyzed to determine whether they indicate a stable driving state. Through this comprehensive analysis, the head classification information, upper-body classification information and lower-body classification information are obtained, each reflecting the quality of the features of the corresponding part of the driver's body during driving. This information provides an important basis for subsequent take-over decisions. The quality-evaluation criteria for the various features are similar; some features may differ slightly, but the judgment principle is the same, so redundant description is omitted.
S342, carrying out classification analysis processing on the head classification information, the upper body classification information and the lower body classification information to obtain yacht take-over detection information.
Illustratively, after the head classification information, the upper body classification information and the lower body classification information are obtained, further processing of these three pieces of classification information is required to generate the yacht take-over detection information, through comprehensive analysis of the classification information, weight distribution, and possibly the application of a machine learning algorithm. Firstly, the head classification information, the upper body classification information and the lower body classification information are input into a comprehensive analysis module (the module continuously processes the input classification information, performs fusion analysis on the processed information, and then iteratively optimises the result, thereby ensuring its accuracy). The module fuses the classification information and, considering that information from different parts may influence the driving state to different degrees, assigns different weights to the classification information of the different parts. For example, facial features such as the open-closed state of the eyes and the blink frequency may more directly reflect the driver's level of attention and fatigue, and thus may be given a higher weight.
The posture and motion changes of the upper body and the lower body can also provide important clues about the driving state, but their influence may be relatively small, so they may be given a lower weight. The weights may be determined according to the specific situation rather than being fixed preset values, which gives greater flexibility and more accurate results. Further, the comprehensive analysis module computes a composite score over the weighted classification information; the score takes the information of each part into account to form an overall driving state evaluation. The scoring algorithm may be designed according to actual needs, for example weighted summation, weighted averaging, or a prediction method based on a machine learning model. It should be noted that different drivers have individual differences and different driving habits. Finally, the yacht take-over detection information is obtained from the result of the composite score: if the composite score exceeds a set threshold, the driving state of the driver may be at risk, such as fatigue or inattention. At this point, to ensure safe operation of the yacht and the safety of the passengers, an alarm is issued or other necessary measures are taken, such as automatically taking over control of the yacht.
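A minimal sketch of the weighted-summation scoring and threshold test described above; the specific weights (0.5/0.3/0.2) and the threshold of 0.6 are illustrative assumptions, not values taken from the embodiment:

```python
def composite_driving_score(head_score, upper_score, lower_score,
                            weights=(0.5, 0.3, 0.2)):
    # Weighted summation of per-part risk scores, each assumed in [0, 1]
    # (higher = worse). The facial score gets the largest weight, since
    # facial features most directly reflect attention and fatigue.
    w_head, w_upper, w_lower = weights
    return w_head * head_score + w_upper * upper_score + w_lower * lower_score

def takeover_needed(score, threshold=0.6):
    # If the composite score exceeds the set threshold, the driving state
    # may be at risk and an alarm / automatic take-over is triggered.
    return score > threshold
```

For example, a poor head score of 0.9 combined with moderate body scores of 0.5 and 0.2 gives 0.64, which exceeds the assumed threshold.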
In one possible implementation, referring to fig. 6, S342, performing classification analysis processing on the head classification information, the upper body classification information and the lower body classification information to obtain yacht take-over detection information, includes:
S3421, performing classification analysis processing on the head classification information, the upper body classification information and the lower body classification information to obtain head range information, upper body range information and lower body range information; the head range information is used for indicating the range occupied by the display state of the driver's face, the upper body range information is used for indicating the range occupied by the display state of the driver's upper limbs, and the lower body range information is used for indicating the range occupied by the display state of the driver's lower limbs.
Illustratively, for the head classification information, the range and variation of the facial features are first analysed. Using computer vision techniques, facial key points, e.g. eyes, mouth, nose, etc., can be precisely located and a geometric model of the face constructed from these key points. The face display state is judged by analysing the position, shape and dynamic change of the key points, for example an eye opening-closing EAR value of 0.23-0.30 belongs to the normal (good driving) state, and the head range information is obtained from the proportion of good features displayed by the eyes together with the proportions of good features displayed by the mouth, nose, forehead and the like. Further, for the upper body classification information, the skeleton key point positions, arm postures and action change characteristics are detected and analysed to identify the posture and actions of the driver's upper body, and the proportion of good features displayed by the upper body is determined. For example, whether the driver is in a relaxed state (a good feature) or a tense state (a bad feature) can be determined from the position and posture of the arms, and the proportion of features displayed in the relaxed state is derived accordingly to construct the upper body range information. Finally, for the lower body classification information, mainly characteristics such as leg posture, motion change and whether the sitting posture is frequently adjusted are analysed; from these characteristics, the posture and the range of motion of the driver's lower body can be determined.
For example, if the driver frequently adjusts the sitting posture or the leg posture, this may mean that the lower body feels uncomfortable or tired, and the lower body range information is constructed accordingly. When extracting the head range information, the upper body range information and the lower body range information, the accuracy and robustness of the range proportions can be enhanced by using a machine learning algorithm or a pattern recognition technique: the states of the different parts can be automatically recognised and classified from historical data and a trained model, providing more accurate range information.
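One way to read the "proportion of good features" above is as the fraction of observation frames in which a body part displays its good-state features; the following sketch makes that reading explicit. This interpretation, and the hypothetical per-frame labels, are assumptions of the sketch:

```python
def good_feature_ratio(frame_states):
    # frame_states: per-frame labels for one body part, e.g. "good" when
    # the displayed feature is in its normal band (EAR in 0.23-0.30,
    # relaxed arm posture, stable leg posture, and so on).
    if not frame_states:
        return 0.0
    return sum(1 for s in frame_states if s == "good") / len(frame_states)

# Hypothetical observation window: three good frames out of four.
head_range = good_feature_ratio(["good", "good", "bad", "good"])  # 0.75
```

The same ratio computed over upper-body and lower-body labels yields the upper body range information and lower body range information.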
S3422, yacht take-over detection information is obtained according to the head range information, the upper body range information and the lower body range information.
First, the head range information, the upper body range information and the lower body range information are taken as input data and fed into a pre-trained machine learning model (such as a neural network model or a decision tree model). The model is trained on a large amount of labelled data and infers the driver's driving situation from the range information of each part. Inside the model, feature extraction and fusion are performed on the head range information, the upper body range information and the lower body range information: feature extraction extracts, through a specific algorithm or network structure, the key information from the raw data that characterises the driver's driving state, and feature fusion combines and integrates this key information into a unified feature vector for the classification or regression task. The model further processes and analyses the fused feature vector, which may mainly involve applying classification algorithms (such as a support vector machine, random forest, etc.) or regression algorithms (such as linear regression, neural network regression, etc.), and judges the driving state of the driver according to a preset threshold or judgment rule. Finally, the yacht take-over detection information is obtained from the output of the model; this information may include a specific score or grade indicating whether the current driving state of the driver is good and whether further measures need to be taken.
S400, detecting the periphery of the yacht after processing based on the yacht take-over detection information to obtain yacht detection data; the yacht detection data are used for reflecting specific conditions in an area extending a preset distance outward from the outer contour of the yacht.
It will be appreciated that, first, the components provided on the yacht are all working; these may for example include cameras, radars, sonars, etc., which are capable of capturing images, distances, speeds and the like of the yacht's surroundings. The data collected by these sensors is raw and unprocessed, and thus requires further data processing and analysis; the data processing algorithms may process the raw data with steps possibly including image recognition, object tracking, distance calculation, etc. For example, in an image captured by a camera, surrounding objects such as ships and obstacles can be identified through an image recognition algorithm and their characteristic information extracted; from the radar and sonar data, the distance and speed of a target can be calculated through a specific algorithm. After the data processing is completed, the yacht can analyse and judge the data. This process may involve complex decision algorithms for determining whether surrounding vessels or obstacles pose a threat to the yacht and whether avoidance or deceleration is required. Finally, the yacht control outputs yacht detection data based on the analysis. The data may include information such as the position, speed and direction of surrounding ships or obstacles, together with action suggestions for the yacht itself; the data can be displayed to the driver through a display screen, used directly for the automatic take-over processing of the yacht, or displayed to the driver while the yacht is being automatically taken over, so that automatic avoidance and automatic course adjustment can be realised in time.
In one possible implementation, referring to fig. 7, S400, detecting the periphery of the yacht after processing based on the yacht take-over detection information to obtain yacht detection data, includes:
S410, detecting the periphery of the yacht after processing the yacht take-over detection information to obtain first detection information and second detection information; wherein the first detection information includes information reflecting the kind or type of obstacle in the detection area, and the second detection information includes information reflecting the distance between the obstacle and the outer contour of the yacht.
Illustratively, first, the yacht take-over detection information typically contains data on the driver's status, the current yacht status and possible risks, which need to be parsed and evaluated by specific algorithms or models to extract information directly related to the detection of the yacht's surroundings. Based on the yacht take-over detection information, the current state of the driver is analysed, for example whether the driver is fatigued, distracted or concentrating; such information helps to assess the driver's ability to perceive and respond to the yacht's surroundings. The current state of the yacht, mainly including speed, direction, sailing mode and the like, is extracted from the yacht take-over detection information. Risk assessment is carried out by combining the state of the driver and the state of the yacht, and the type and level of risk possibly present in the current environment are determined. Data on the surrounding environment, including images, positions, speeds and the like of obstacles, are collected in real time by various sensors (such as radar, cameras, sonar, etc.) mounted on the yacht. The data collected by the sensors are fused with the risk assessment results, which involves aligning, correcting and integrating the information from multiple data sources to ensure its accuracy and consistency, so as to obtain the first detection information and the second detection information. The first detection information is the type or kind of the obstacle: the image data collected by the sensors are analysed through image recognition and classification algorithms to recognise the type or kind of obstacle, which may be a ship, a buoy, a reef, another yacht or some other type of obstacle.
The second detection information is the distance between the obstacle and the outer contour of the yacht; radar or sonar data can be used to measure this distance accurately. The data are processed through a specific algorithm and converted into understandable distance information, so that the first detection information and the second detection information are obtained from the yacht take-over detection information and the real-time data of the yacht's surroundings. This information provides important input for the subsequent automatic take-over processing of the yacht and ensures that the yacht can respond safely and intelligently to changes in the surrounding environment.
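As one concrete instance of converting ranging data into distance information, a sonar echo delay can be turned into a distance by time-of-flight; the nominal sound speed of 1500 m/s in seawater is an assumption of this sketch (the real value varies with temperature and salinity):

```python
SOUND_SPEED_WATER = 1500.0  # m/s, nominal speed of sound in seawater (assumed)

def sonar_distance(echo_delay_s):
    # Time-of-flight ranging: the pulse travels to the obstacle and back,
    # so the one-way distance is half the round-trip path.
    return SOUND_SPEED_WATER * echo_delay_s / 2.0
```

A 100 ms round-trip echo thus corresponds to an obstacle 75 m from the transducer.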
S420, carrying out combination processing based on the first detection information and the second detection information to obtain total detection information; wherein the total detection information is used to reflect a specific kind and a specific position of the obstacle.
It will be appreciated that, in combining the first detection information and the second detection information to obtain the total detection information, the type of obstacle (i.e. the first detection information) is mainly combined with its specific location around the yacht (i.e. the second detection information) to form complete and understandable information about the specific situation of the obstacle.
Illustratively, first, the first detection information is parsed, and what type of object can be detected as an obstacle by an information recognition algorithm. For example, it may be a buoy, a cargo ship, a small yacht or a reef, further each type of obstacle may have different risk levels and coping strategies. Second, taking into account the second detection information mainly provides a precise distance between the obstacle and the outer contour of the yacht, which is obtained by radar, sonar or other ranging techniques. The information not only can reflect how far the obstacle is, but also can predict when the obstacle possibly enters the route or the safety range of the yacht by combining the current speed and the direction of the yacht, and when the two information are combined, a data fusion technology is needed to ensure the consistency of the two information in time and space. For example, time stamps may be used to synchronize the data to ensure that they reflect the same time of day environmental conditions. In addition, some spatial calibration may be required to ensure that the position of the obstacle in the image matches its position in the radar or sonar data. Finally, by combining these pieces of information together, total detection information can be generated. The total detection information contains not only the kind of obstacle but also its specific position, distance and possibly risk assessment with respect to the yacht. The information can be displayed to the driver in a graphical or textual manner to better understand the condition of the surrounding environment, and can also be used as input to the yacht auto-takeover system for automatic avoidance, adjustment of heading, or other safety measures.
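The timestamp-based synchronisation of the two information sources can be sketched as a nearest-in-time pairing of classified obstacles with range readings; the tuple layouts and the 0.5 s tolerance are illustrative assumptions:

```python
def fuse_detections(type_detections, range_detections, max_dt=0.5):
    # type_detections:  [(timestamp_s, obstacle_type), ...]  (first information)
    # range_detections: [(timestamp_s, distance_m), ...]     (second information)
    # Pair each classified obstacle with the range reading closest in time,
    # discarding pairs whose timestamps differ by more than max_dt seconds.
    total = []
    for t_ts, kind in type_detections:
        best = min(range_detections, key=lambda r: abs(r[0] - t_ts),
                   default=None)
        if best is not None and abs(best[0] - t_ts) <= max_dt:
            total.append({"type": kind, "distance_m": best[1],
                          "timestamp": t_ts})
    return total
```

Each fused record then carries both the kind and the distance of an obstacle, i.e. one entry of the total detection information.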
S430, obtaining yacht detection data according to the total detection information.
For example, the total detection information may be analysed and parsed in depth. This involves a detailed assessment of the type, location, distance and possible risk level of each obstacle; from the analysis it can be determined which obstacles may pose a threat to the yacht and how great that threat is. Based on the results of the analysis, the yacht detection data is generated. The data includes key information such as the specific location, speed and direction of surrounding obstacles, which helps the automatic take-over of the yacht to track the dynamic changes of the current environment. Meanwhile, corresponding action suggestions or early-warning information is generated according to the type and risk level of each obstacle. In generating the yacht detection data, the state of the yacht, such as speed, direction and sailing mode, is combined with the detection information of the surrounding environment to form more comprehensive yacht detection data. Finally, the yacht detection data is presented to the driver or used for automatic take-over, for example through a display screen, voice prompts or other interactive means. The driver can learn the condition of the surrounding environment from the data and make corresponding decisions or operations to ensure the safe navigation of the yacht.
S500, reminding a driver to take corresponding measures based on the yacht detection data.
For example, yacht detection data may be presented and communicated in a number of ways to accommodate the habits and preferences of different drivers. This may include displaying the location, distance and type of surrounding obstacles in real time on the yacht's display screen, as well as alerting the driver to potential risks through audible prompts or vibration feedback. Secondly, the risk level can be automatically estimated from the specific content of the yacht detection data, and corresponding advice or warnings provided to the driver. For example, if a large vessel is detected approaching ahead and a collision with the yacht is likely, the driver may be alerted to take emergency avoidance measures by highlighting the obstacle, sounding an alarm, vibration or the like. In addition, the yacht detection data can also be combined with the automatic take-over of the yacht to realise higher-level safety measures. When a potential risk is detected and the driver is considered unable to respond in time, the automatic take-over can assume control, automatically adjusting the speed and direction of the yacht or taking other avoidance measures to ensure the safety of the yacht.
Finally, in order to continuously improve performance and accuracy, the yacht can also collect driver feedback and operation data for optimising the algorithms and models; through continuous learning and improvement, more accurate and timely reminders and advice can be provided.
With the above arrangement, the azimuth information of the driver is acquired and analysed to obtain the driver's state information, from which the yacht take-over detection information is obtained. Based on the yacht take-over detection information, the periphery of the yacht is detected to acquire yacht detection data, and the driver is reminded to take corresponding measures according to the detection data. The method can monitor the state of the driver and the conditions of the yacht's surroundings in real time, discover potential risks in time, and improve the operational safety of the yacht. Through effective data display, risk assessment, automatic take-over and other functions, operational safety and reliability are improved, and timely prompts help to avoid situations that may damage the yacht, thereby reducing maintenance costs and property loss. The operational safety of the yacht is ensured, passengers are protected from injury, the accident rate of the yacht is reduced, the impact on the personal safety of passengers in the event of a collision is greatly reduced, and failure of the yacht's components is prevented.
In one possible implementation, referring to fig. 8, S500, reminding a driver to take corresponding measures based on yacht detection data, includes:
S510, acquiring real-time information of the yacht; the real-time information comprises information for reflecting the moving speed, moving direction and moving track of the yacht during running;
For example, various sensors and data processing techniques may be employed. Speed sensors may be provided on the yacht to measure its moving speed in real time, and speed data may also be obtained through the yacht's control system or data interface to learn the current travelling speed. Secondly, the course and moving direction of the yacht can be monitored through a direction sensor such as a gyroscope or compass, which provides the yacht's heading information in real time. In addition, real-time position information of the yacht can be obtained through GPS and other positioning technologies, and the movement track of the yacht analysed from the continuous position data. By comprehensively using these sensors and data processing techniques, real-time information of the yacht, including moving speed, moving direction and moving track, can be obtained, providing accurate data support for subsequent decisions and reminders. Combined with the current speed and direction of the yacht, the real-time information is used to predict when an obstacle may enter the yacht's route or safety range, and can also be used to predict whether an obstacle is moving away from the yacht.
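Deriving speed and heading from two consecutive position fixes can be sketched as follows; the sketch assumes the GPS fixes have already been projected onto a local east/north plane in metres (the projection step itself is omitted):

```python
import math

def speed_and_heading(p0, p1, dt):
    # p0, p1: (east_m, north_m) positions taken dt seconds apart.
    # Heading is returned in degrees clockwise from north, as on a compass.
    de, dn = p1[0] - p0[0], p1[1] - p0[1]
    speed = math.hypot(de, dn) / dt          # m/s over the interval
    heading = math.degrees(math.atan2(de, dn)) % 360.0
    return speed, heading
```

Moving 10 m due east in 2 s, for instance, gives 5 m/s on a heading of 90 degrees; a sequence of such fixes forms the movement track.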
S520, processing and analyzing based on the real-time information and yacht detection data to obtain real-time influence information; the real-time influence information is used for reflecting the influence condition between the obstacle and the yacht;
It will be appreciated that, in processing and analysing the real-time information and the yacht detection data, various factors need to be considered comprehensively to obtain real-time influence information that accurately reflects the influence between the obstacle and the yacht. The real-time information of the yacht is compared and correlated with the yacht detection data: the yacht detection data provides detailed information on surrounding obstacles, such as position, speed and type, while the real-time information reflects the current moving speed, direction and track of the yacht. By comparing the two sets of data, the relative position and speed relationship between the obstacle and the yacht can be judged, giving a preliminary understanding of their mutual influence. Secondly, the real-time information and yacht detection data are deeply analysed and processed using advanced algorithms and models. These algorithms and models, based on physical principles, kinematic laws and a large amount of historical data, predict and simulate the motion states of the yacht and the obstacle, and can accurately estimate the potential influence of the obstacle on the yacht, including possible collision risk, degree of influence, time of influence and the like. In addition, special factors, such as environmental factors (wind, waves, currents, etc.) affecting the movement of the yacht and the obstacle, and the performance characteristics of the yacht (steering performance, stability, etc.), may have an important bearing on the real-time influence information. Finally, the real-time influence information is obtained by combining the analysis and processing of the above factors.
Such information will reflect in detail the interplay between the obstacle and the yacht, including the size of the potential collision risk, the range of possible impacts, the duration of the impact, etc., to ensure safe navigation of the yacht.
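One common way to quantify the "possible collision risk and time of influence" described above is the closest point of approach (CPA) between yacht and obstacle; this sketch assumes constant relative velocity over the prediction horizon, which is a simplification of the kinematic models mentioned in the text:

```python
import math

def closest_point_of_approach(rel_pos, rel_vel):
    # rel_pos: obstacle position relative to the yacht (x, y), in metres.
    # rel_vel: obstacle velocity relative to the yacht (vx, vy), in m/s.
    # Returns (t_cpa, d_cpa): seconds until, and distance in metres at,
    # the closest approach, assuming both keep their current motion.
    px, py = rel_pos
    vx, vy = rel_vel
    v2 = vx * vx + vy * vy
    if v2 == 0.0:                  # no relative motion: distance is constant
        return 0.0, math.hypot(px, py)
    t = -(px * vx + py * vy) / v2  # minimiser of |rel_pos + t * rel_vel|
    t = max(t, 0.0)                # closest approach already behind us
    return t, math.hypot(px + t * vx, py + t * vy)
```

An obstacle 100 m ahead closing at 10 m/s, for example, reaches zero distance in 10 s; a small d_cpa with a small t_cpa indicates a high, imminent collision risk.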
S530, obtaining movement improvement data according to the real-time influence information. The movement improvement data includes data reflecting adjustments and improvements to the yacht's moving direction, speed and track.
It will be appreciated that after the real-time influence information is obtained, the movement improvement data is obtained on its basis. The movement improvement data is calculated from the real-time influence information and is intended to guide the driver in improving the movement state of the yacht so as to avoid potential risks or optimise the navigation path. Potential collision risks in the real-time influence information are analysed first. If it is detected that an obstacle crosses or approaches the yacht's track and the collision risk is high, the movement parameters of the yacht that need to be adjusted, such as speed, direction or route, are calculated from the actual movement data so as to avoid the obstacle. These parameter adjustments may be determined based on physical principles, kinematic laws and the handling properties of the yacht to ensure that the yacht is kept safely away from potential risks. Secondly, the influence of other factors in the real-time influence information is considered, such as environmental factors like wind, waves and currents acting on the movement of the yacht. These factors are combined and correlated to obtain corresponding movement improvement data, helping the driver to better cope with environmental factors and maintain the stability and safety of the yacht. The movement improvement data may be used to display the adjusted route, speed or direction advice on a display screen, or to convey corresponding instructions to the driver by audible prompts, vibration feedback or the like. The driver can adjust the moving state of the yacht according to the movement improvement data, thereby avoiding potential risks, optimising the sailing path and ensuring the safety and stability of the yacht.
By acquiring movement improvement data based on the real-time influence information in this way, the driver can understand the relationship between the yacht and the surrounding obstacles more accurately and take corresponding measures in time to ensure safe navigation. This not only improves the safety of yacht travel, but also reduces potential accident risks and provides a more comfortable and safer experience for the yacht's passengers.
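A coarse rule mapping the real-time influence information (time to and distance at closest approach) to an adjustment suggestion might look as follows; the 50 m safe distance, 60 s warning horizon and the suggestion strings are all illustrative assumptions:

```python
def movement_improvement(t_cpa_s, d_cpa_m,
                         safe_distance_m=50.0, warn_time_s=60.0):
    # Turn the predicted closest approach into a coarse suggestion for
    # the driver: keep course, plan an adjustment, or act immediately.
    if d_cpa_m >= safe_distance_m:
        return "maintain course and speed"
    if t_cpa_s <= warn_time_s:
        return "reduce speed and alter heading now"
    return "plan heading adjustment"
```

In a real system the suggestion would also account for the environmental factors (wind, waves, currents) and handling properties mentioned above.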
S540, reminding a driver of making corresponding measures according to the movement improvement data.
It can be understood that after the movement improvement data are obtained, corresponding reminder information is generated from the data to inform the driver to take corresponding measures in time. The reminder information can take various forms, such as text, images, sound or vibration, to suit different driving environments and driver needs. The key information the driver needs to be reminded of is determined from the specific content of the movement improvement data. For example, if the movement improvement data indicates that the speed of the yacht needs to be adjusted, a recommendation or instruction regarding the speed adjustment may be generated; if the movement improvement data relates to changing the heading or course of the yacht, a corresponding heading or course adjustment recommendation is provided. Secondly, the reminder information is presented to the driver in a suitable manner. If the yacht is equipped with a display screen, a reminder in the form of text or images may be shown on it, for example "please reduce speed" or "adjust heading to XX degrees". Meanwhile, voice prompts, mobile phone notifications, voice broadcasts, watch vibration prompts, vibration feedback and the like can ensure that the driver receives the prompt in time under various conditions. In addition, the content and form of the reminder information can be dynamically adjusted according to the driver's response and the real-time state of the yacht. For example, if the driver fails to respond to the reminder in time, its intensity is increased, for example by raising the volume or frequency of the voice prompt, so that the driver notices it and takes corresponding measures.
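The escalation behaviour just described (strengthen the reminder if the driver does not respond, ultimately handing control to the automatic take-over) can be sketched as a simple timer-driven rule; the 5 s and 15 s boundaries and the level names are assumptions of the sketch:

```python
def alert_level(seconds_since_alert, acknowledged):
    # Escalate the reminder if the driver has not acknowledged it:
    # a visual cue first, then added sound, then (per the description)
    # the automatic take-over assumes control.
    if acknowledged:
        return "none"
    if seconds_since_alert < 5:
        return "visual"
    if seconds_since_alert < 15:
        return "visual+audio"
    return "auto-takeover"
```

A real implementation would feed this from the reminder timestamp and the driver's acknowledgement input (button press, helm movement, etc.).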
With this arrangement, reminding the driver to take corresponding measures according to the movement improvement data helps the driver to better cope with the various challenges and risks of the sailing process and guarantees the safety and stability of the yacht. The safety of yacht navigation, the driving experience of the driver and the comfort of the passengers can all be improved, and potential risks can be discovered in time. Through effective data display, risk assessment, automatic take-over and the like, failure of the yacht's components, which would prevent the yacht from travelling normally, is avoided as far as possible, further improving the safety and reliability of navigation.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Corresponding to the yacht control method described in the above embodiments, the embodiment of the present application further provides a yacht control system, where each module of the system may implement each step of the yacht control method. Fig. 9 shows a block diagram of a yacht control system according to an embodiment of the present application, and for convenience of explanation, only parts related to the embodiment of the present application are shown.
Referring to fig. 9, the yacht control system comprises:
An acquisition unit configured to acquire azimuth information of a driver; the azimuth information is used for indicating the specific position of the driver in the cockpit;
the analysis unit is used for carrying out analysis processing based on the azimuth information to obtain the state information of the driver; wherein the status information includes information for reflecting a specific status of the driver when driving the yacht;
The first obtaining unit is used for obtaining yacht take-over detection information according to the state information; the yacht take-over detection information comprises information on a detection task in which the yacht is automatically taken over;
The second obtaining unit is used for detecting the periphery of the yacht after processing based on yacht take-over detection information to obtain yacht detection data; the yacht detection data are used for reflecting specific conditions in an area extending from the outer contour of the yacht to a direction away from the outer contour of the yacht by a preset distance;
And the reminding unit is used for reminding a driver of making corresponding measures based on the yacht detection data.
It should be noted that, because the content of information interaction and execution process between the above systems/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of the functional units and modules is illustrated. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the system may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiment may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
The embodiment of the application further provides a yacht. Fig. 10 is a schematic structural diagram of a yacht control device according to an embodiment of the application. As shown in fig. 10, the control device 6 of this embodiment includes: at least one processor 60 (only one is shown in fig. 10), at least one memory 61 (only one is shown in fig. 10), and a computer program 62 stored in the at least one memory 61 and executable on the at least one processor 60. When executing the computer program 62, the processor 60 causes the control device 6 to perform the steps of any of the yacht control method embodiments described above, or causes the control device 6 to perform the functions of the modules/units of the system embodiments described above.
Illustratively, the computer program 62 may be partitioned into one or more modules/units, which are stored in the memory 61 and executed by the processor 60 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specified functions, and the instruction segments are used to describe the execution of the computer program 62 in the control device 6.
The control device 6 may be a computing device such as a desktop computer, a notebook computer, or a palmtop computer. The control device 6 may include, but is not limited to, the processor 60 and the memory 61. Those skilled in the art will appreciate that fig. 10 is merely an example of the control device 6 and does not limit the control device 6, which may include more or fewer components than shown, combine certain components, or use different components; for example, it may also include input-output devices, network access devices, buses, etc.
The processor 60 may be a central processing unit (Central Processing Unit, CPU); the processor 60 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application-Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 61 may, in some embodiments, be an internal storage unit of the control device 6, such as a hard disk or a memory of the control device 6. In other embodiments, the memory 61 may also be an external storage device of the control device 6, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card provided on the control device 6. Further, the memory 61 may include both an internal storage unit and an external storage device of the control device 6. The memory 61 is used for storing an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 61 may also be used for temporarily storing data that has been output or is to be output.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, performs the steps of any of the various method embodiments described above.
Embodiments of the present application further provide a computer program product which, when run on a control device, causes the control device to carry out the steps of any of the respective method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program; the computer program may be stored in a computer readable storage medium and, when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to the control device, a recording medium, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer readable media may not include electrical carrier signals and telecommunications signals.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not detailed or illustrated in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided herein, it should be understood that the disclosed yacht control system, apparatus and method may be implemented in other ways. For example, the above embodiments of the yacht control system are merely illustrative: the division of the modules or units is merely a logical functional division, and in actual implementation there may be other division manners; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.
Claims (10)
1. A yacht control method comprising:
Acquiring azimuth information of a driver; wherein the azimuth information is used for indicating a specific position of the driver in the cockpit;
analyzing and processing based on the azimuth information to obtain the state information of the driver; wherein the status information includes information for reflecting a specific status of the driver when driving the yacht;
Obtaining yacht take-over detection information according to the state information; wherein the yacht take-over detection information comprises information of a detection task of the yacht which is automatically taken over;
Detecting the periphery of the yacht after processing based on the yacht take-over detection information to obtain yacht detection data; the yacht detection data are used for reflecting specific conditions in an area extending from the outer contour of the yacht to a direction away from the outer contour of the yacht by a preset distance;
Based on the yacht detection data, reminding the driver to take corresponding measures.
2. The yacht control method as claimed in claim 1, wherein before the obtaining of the azimuth information of the driver, the method further comprises:
Acquiring an intra-cabin image of the cockpit; wherein the intra-cabin image is used for reflecting specific conditions in the cockpit;
Detecting and analyzing based on the images in the cockpit to obtain the existence condition of personnel in the cockpit; wherein the personnel presence condition is used to indicate whether the driver is present in the cockpit;
Obtaining yacht takeover information under the condition that the existence condition of the personnel indicates that the driver does not exist in the cockpit; the yacht take-over information comprises information of a detection task of the yacht and a running task of the yacht which are automatically taken over;
And controlling the yacht based on the yacht takeover information and reminding a driver to take corresponding measures.
3. The yacht control method as claimed in claim 1, wherein the analyzing based on the azimuth information to obtain the status information of the driver comprises:
determining and analyzing based on the azimuth information to obtain the face orientation information of the driver; wherein the face orientation information is used for reflecting a specific direction in which the face of the driver is oriented;
Obtaining front and rear information of the driver according to the face orientation information; wherein the front-rear information includes information of the front and rear sides of the body of the driver;
obtaining body information of the driver according to the front and rear information of the driver; wherein the physical information is used to indicate a specific condition of the whole body of the driver;
and carrying out identification processing on the body information to obtain the state information of the driver.
4. The yacht control method as claimed in claim 1, wherein the obtaining yacht take-over detection information based on the status information comprises:
Performing head feature recognition processing based on the state information to obtain head information; wherein the head information includes information reflecting characteristics exhibited by the driver's face while driving the yacht;
performing upper body recognition processing of the driver based on the state information to obtain upper body information; wherein the upper body information includes information reflecting characteristics exhibited by the upper limbs of the driver while driving the yacht;
Performing lower body recognition processing of the driver based on the state information to obtain lower body information; the lower body information comprises information used for reflecting characteristics exhibited by lower limbs of the driver when driving the yacht;
And carrying out comprehensive analysis processing based on the head information, the upper body information and the lower body information to obtain the yacht take-over detection information.
5. The yacht control method as claimed in claim 4, wherein the performing the comprehensive analysis processing based on the head information, the upper body information and the lower body information to obtain the yacht take-over detection information comprises:
Analyzing, processing and classifying based on the head information, the upper body information and the lower body information to obtain head classification information, upper body classification information and lower body classification information; the head classification information is used for reflecting the characteristics of the face of the driver when driving the yacht, the upper body classification information is used for reflecting the characteristics of the upper limbs of the driver when driving the yacht, and the lower body classification information is used for reflecting the characteristics of the lower limbs of the driver when driving the yacht;
and carrying out classification analysis processing on the head classification information, the upper body classification information and the lower body classification information to obtain the yacht take-over detection information.
6. The yacht control method as claimed in claim 5, wherein the performing the classification analysis processing on the head classification information, the upper body classification information and the lower body classification information to obtain the yacht take-over detection information comprises:
Performing classification analysis processing on the head classification information, the upper body classification information and the lower body classification information to obtain head range information, upper body range information and lower body range information; the head range information is used for indicating the condition of the range occupied by the display state of the face of the driver, the upper body range information is used for indicating the condition of the range occupied by the display state of the upper limbs of the driver, and the lower body range information is used for indicating the condition of the range occupied by the display state of the lower limbs of the driver;
And obtaining the yacht take-over detection information according to the head range information, the upper body range information and the lower body range information.
7. The yacht control method as claimed in claim 1, wherein the detecting the periphery of the yacht after the processing based on the yacht take-over detection information to obtain yacht detection data comprises:
detecting the periphery of the yacht after processing the yacht take-over detection information to obtain first detection information and second detection information; wherein the first detection information includes information reflecting the type of an obstacle in the detection area, and the second detection information includes information reflecting the distance between the obstacle and the outer contour of the yacht;
Performing combination processing based on the first detection information and the second detection information to obtain total detection information; wherein the total detection information is used for reflecting specific types and specific positions of the obstacles;
And obtaining the yacht detection data according to the total detection information.
8. The yacht control method as claimed in claim 7, wherein the reminding the driver to take the corresponding measures based on the yacht detection data comprises:
Acquiring real-time information of the yacht; wherein the real-time information includes information for reflecting a moving speed, a moving direction and a moving track when the yacht travels;
Processing and analyzing based on the real-time information and the yacht detection data to obtain real-time influence information; the real-time influence information is used for reflecting the influence condition between the obstacle and the yacht;
Obtaining movement improvement data according to the real-time influence information; wherein the movement improvement data includes data for reflecting adjustment and improvement of a movement direction, a movement speed, and a movement trajectory of the yacht;
and reminding the driver to take corresponding measures according to the movement improvement data.
9. A yacht control system, comprising:
An acquisition unit, configured to acquire azimuth information of a driver; wherein the azimuth information is used for indicating the specific position of the driver in the cockpit;
An analysis unit, configured to perform analysis processing based on the azimuth information to obtain state information of the driver; wherein the state information includes information reflecting the specific state of the driver when driving the yacht;
A first obtaining unit, configured to obtain yacht take-over detection information according to the state information; wherein the yacht take-over detection information includes information of a detection task of the yacht that is automatically taken over;
A second obtaining unit, configured to detect the periphery of the yacht after processing based on the yacht take-over detection information to obtain yacht detection data; wherein the yacht detection data are used for reflecting specific conditions in an area extending a preset distance outward from the outer contour of the yacht;
And a reminding unit, configured to remind the driver to take corresponding measures based on the yacht detection data.
10. A yacht comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any of claims 1 to 8 when executing the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410737893.1A CN118457865A (en) | 2024-06-07 | 2024-06-07 | Yacht control method and system and yacht |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410737893.1A CN118457865A (en) | 2024-06-07 | 2024-06-07 | Yacht control method and system and yacht |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118457865A true CN118457865A (en) | 2024-08-09 |
Family
ID=92163704
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410737893.1A Pending CN118457865A (en) | 2024-06-07 | 2024-06-07 | Yacht control method and system and yacht |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118457865A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112277955B (en) | Driving assistance method, device, equipment and storage medium | |
Dong et al. | Driver inattention monitoring system for intelligent vehicles: A review | |
Craye et al. | Driver distraction detection and recognition using RGB-D sensor | |
CN101466305B (en) | Method for determining and analyzing a location of visual interest | |
Jiménez et al. | Gaze fixation system for the evaluation of driver distractions induced by IVIS | |
KR101276770B1 (en) | Advanced driver assistance system for safety driving using driver adaptive irregular behavior detection | |
KR20200063193A (en) | Driving management method and system, in-vehicle intelligent system, electronic device, medium | |
WO2019028798A1 (en) | Method and device for monitoring driving condition, and electronic device | |
CN105956548A (en) | Driver fatigue state detection method and device | |
Wu et al. | Reasoning-based framework for driving safety monitoring using driving event recognition | |
US11427208B2 (en) | Driver condition determination apparatus, method and computer program product | |
CN117227740B (en) | Multi-mode sensing system and method for intelligent driving vehicle | |
Guria et al. | Iot-enabled driver drowsiness detection using machine learning | |
Mishra | Driver drowsiness detection | |
CN113743279A (en) | Ship pilot state monitoring method, system, storage medium and equipment | |
CN118457865A (en) | Yacht control method and system and yacht | |
CN114529887A (en) | Driving behavior analysis method and device | |
WO2021262166A1 (en) | Operator evaluation and vehicle control based on eyewear data | |
Tarba et al. | The driver's attention level | |
Ujir et al. | Real-time driver’s monitoring mobile application through head pose, drowsiness and angry detection | |
Mahomed et al. | Driver Posture Recognition: A Review | |
Hwang et al. | Collision Risk Situation Awareness Evaluation Model Based on Bridge Operator Behavior Using Wearable Sensors | |
Shostak et al. | Using internet of things technologies to ensure cargo transportation safety | |
CN118269999B (en) | Multi-mode interaction new energy automobile control system | |
Huu et al. | Detecting Drivers Falling Asleep Algorithm Based on Eye and Head States |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||