Disclosure of Invention
The invention mainly aims to provide a mowing area dividing method, aiming at improving the area dividing efficiency of a mowing robot and allowing a mowing area to be customized according to the different mowing requirements of a user, so as to meet diversified mowing requirements.
To achieve the above object, the present invention provides a mowing area dividing method including the steps of:
acquiring position characteristic parameters of the mowing robot;
acquiring a solid image of a lawn in an area where the mowing robot is located according to the position characteristic parameters;
displaying the solid image and acquiring user feedback information returned based on the solid image;
and determining a target area for mowing by the mowing robot according to the user feedback information.
Optionally, the position characteristic parameter includes satellite positioning information, and the step of obtaining a solid image of a lawn in an area where the mowing robot is located according to the position characteristic parameter includes:
and acquiring a satellite map corresponding to the satellite positioning information as a solid image of the lawn in the area where the mowing robot is located.
Optionally, the step of determining the target area for mowing by the mowing robot according to the user feedback information comprises:
determining an initial target area for mowing of the mowing robot according to the user feedback information to serve as a first area;
acquiring a first control instruction input by a user;
controlling the mowing robot to move according to the first control instruction, and acquiring a first movement track of the mowing robot;
generating a second area according to the first motion track;
comparing the regional characteristic parameters of the first region and the second region;
and when the area characteristic parameters meet preset conditions, taking the first area as the target area.
Optionally, the region feature parameters may specifically include a region position feature point, a region area, and a region shape, and the step of comparing the region feature parameters of the first region and the second region includes:
determining an area difference between the area of the first region and the area of the second region, determining a similarity between the shape of the first region and the shape of the second region, and determining a first distance between the position feature point of the first region and the position feature point of the second region;
when the region characteristic parameter meets a preset condition, the step of taking the first region as the target region comprises the following steps:
and when the area difference is smaller than or equal to a preset area difference threshold value, the similarity is larger than or equal to a preset similarity threshold value, and the first distance is smaller than or equal to a first preset distance threshold value, judging that the area characteristic parameters meet preset conditions, and taking the first area as the target area.
Optionally, the step of determining an initial target area for mowing by the mowing robot according to the user feedback information comprises:
extracting area boundary identification information in the user feedback information;
and generating the initial target area according to the area boundary identification information.
Optionally, the area boundary identification information includes mowing area boundary identification information and obstacle area boundary identification information, and the step of generating the initial target area according to the area boundary identification information includes:
generating a mowing area selected by a user according to the mowing area boundary identification information, and generating an obstacle marking area according to the obstacle area boundary identification information;
determining the initial target zone based on the user selected mowing zone and the obstacle marking zone.
Optionally, when the mowing area boundary identification information is a graph, the step of generating the mowing area selected by the user according to the mowing area boundary identification information includes:
judging whether the lines of the graph enclose a closed area;
if the lines of the graph do not enclose a closed area, and the graph is a single line, determining a second distance between the two end points of the line;
and when the second distance is smaller than or equal to a second preset distance threshold, connecting the two end points with a line segment, and taking the closed area enclosed by the line segment and the graph as the mowing area selected by the user.
Optionally, before the step of displaying the solid image and acquiring the user feedback information returned based on the solid image, the method further includes:
judging whether the image quality of the solid image meets a preset requirement or not;
if yes, executing the step of displaying the solid image and acquiring user feedback information returned based on the solid image;
if not, acquiring a second control instruction input by the user;
controlling the mowing robot to move according to the second control instruction, and acquiring a second movement track of the mowing robot;
and determining the target area according to the second motion track.
Optionally, after the step of displaying the solid image and acquiring the user feedback information returned based on the solid image, the method further includes:
acquiring a current track generation mode of the mowing robot;
when the current track generation mode of the mowing robot is in a first mode, extracting a user-specified path in the user feedback information;
determining a driving path of the mowing robot according to the path designated by the user;
when the current track generation mode of the mowing robot is in a second mode, the step of determining the target area for mowing by the mowing robot according to the user feedback information is executed;
and generating a driving path of the mowing robot in the target area according to a preset rule.
Further, to achieve the above object, the present invention also provides a mowing robot comprising:
a control device, the control device comprising: a memory, a processor and a mowing area dividing program stored on the memory and executable on the processor, the mowing area dividing program, when executed by the processor, implementing the steps of the mowing area dividing method according to any one of the above;
and the mowing device is used for performing mowing operation in the target area determined by the control device.
The invention provides a mowing area dividing method. A solid image of the lawn in the area where the mowing robot is located is obtained through the position characteristic parameters of the mowing robot and displayed, so that after seeing the solid image the user can input instructions according to his or her mowing requirements to specify a mowing area, a non-mowing area and the like in the lawn. User feedback information formed from the instructions input by the user based on the solid image is acquired, and the target area for mowing by the mowing robot is determined according to the user feedback information. The determined target area thus meets the mowing requirements of the user without manually laying a boundary, which improves the area dividing efficiency of the mowing robot. The user can customize and divide a virtual mowing area according to different mowing requirements, and the creative space of the user is exploited to the greatest extent, so that the mowing robot can flexibly adapt to diversified mowing requirements and produce lawn patterns of different designs.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The main solution of the embodiment of the invention is as follows: acquiring position characteristic parameters of the mowing robot; acquiring a solid image of a lawn in an area where the mowing robot is located according to the position characteristic parameters; displaying the solid image and acquiring user feedback information returned based on the solid image; and determining a target area for mowing by the mowing robot according to the user feedback information.
In the prior art, the mowing boundary of a mowing robot is defined by manually burying a boundary line. This approach is inefficient and failure-prone, the buried line is easily damaged, failure points are difficult to find and repair, and the mowing robot cannot be used flexibly according to different mowing requirements.
The invention provides a solution in which no boundary needs to be laid manually: the required boundary graph and the working lines within the area are created through a mobile phone APP or a computer desktop and sent to the robot. The determined mowing target area therefore meets the mowing requirements of the user, the area dividing efficiency of the mowing robot is improved, a number of hardware facilities (such as buried wires and position identification points) are eliminated, the probability of the corresponding faults is reduced, and the user is spared the trouble of the initial installation. The user can customize and divide a virtual mowing area according to different mowing requirements, and the creative space of the user is exploited to the maximum extent, so that the mowing robot can flexibly adapt to diversified mowing requirements.
The invention provides a mowing robot. The mowing robot is an automatic device that mows and trims the grass in a target area of the lawn without manual operation.
In an embodiment of the present invention, referring to fig. 1, the mowing robot may specifically include: the lawn mowing device 100, the control device 200, the positioning device 300, the image acquisition device 400, the driving device 500 and the like.
The mowing device 100 is connected to the control device 200 and, when switched on, cuts the grass within the target area determined by the control device 200.
The positioning device 300 may be connected to the control device 200, to a Global Positioning System (GPS) or BeiDou Navigation Satellite System (BDS), and to a positioning module in the charging pile, respectively. The control device 200 may acquire the position characteristic parameters of the mowing robot from the positioning device 300 and determine the target area for mowing by the mowing robot according to the acquired position characteristic parameters. The charging pile is generally arranged at the lawn boundary or in an area near the lawn, and its positioning module may also be connected to the GPS or BDS. A reference coordinate system is established with the position characteristic parameters of the positioning module in the charging pile as the origin, and the current position of the mowing robot can be determined by comparing the position characteristic parameters of the positioning device 300 with those of the positioning module. The boundary of the target area determined by the control device 200 may be expressed as corresponding coordinates in the reference coordinate system, and the operation of the mowing robot may be limited according to the determined coordinates so that the current position of the mowing robot during mowing does not exceed the boundary of the target area.
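As an illustration of how the reference coordinate system described above might be handled in software, the following Python sketch converts satellite positioning information into local coordinates with the charging-pile positioning module as the origin and checks whether the robot stays inside a target-area boundary. The equirectangular approximation, the function names and the use of the shapely library are assumptions made only for this sketch; the embodiment does not prescribe a particular conversion or library.

```python
import math
from shapely.geometry import Point, Polygon

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in metres

def to_reference_frame(lat_deg, lon_deg, origin_lat_deg, origin_lon_deg):
    """Convert a GPS/BDS fix to (x, y) metres relative to the charging-pile origin.

    An equirectangular approximation is adequate over the few tens of metres a
    typical lawn spans (an assumption, not a requirement of the embodiment).
    """
    d_lat = math.radians(lat_deg - origin_lat_deg)
    d_lon = math.radians(lon_deg - origin_lon_deg)
    x = d_lon * math.cos(math.radians(origin_lat_deg)) * EARTH_RADIUS_M  # east
    y = d_lat * EARTH_RADIUS_M                                           # north
    return x, y

def inside_target_boundary(robot_xy, boundary_xy):
    """True when the robot's current position lies inside the boundary of the
    target area, both expressed in the reference coordinate system."""
    return Polygon(boundary_xy).contains(Point(robot_xy))
```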
And the driving device 500 is connected with the control device 200 and is used for realizing the movement and the stop of the mowing robot.
And the image acquisition device 400 is connected with the control device 200 and is used for capturing images of the surrounding scene when the mowing robot moves.
Referring to fig. 2, the control device 200 may include: a processor 2001 (e.g., CPU), memory 2002, and the like. The processor 2001 is connected to a memory 2002, and the memory 2002 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 2002 may alternatively be a storage device separate from the processor 2001 described previously.
The control device 200 can be respectively connected with the mowing device 100, the positioning device 300, the image capturing device 400, the driving device 500 and the like. The control device 200 can obtain the required information from the positioning device 300 and the image capturing device 400 and control the operation of the mowing device 100 and the driving device 500 according to the determined target area.
In addition, the control device 200 is further connected to a human-computer interaction device (e.g., a computer, a mobile phone, etc.). The control device 200 may transmit the solid image of the lawn in the area where the mowing robot is located to the human-computer interaction device and acquire user feedback information returned by the human-computer interaction device based on the solid image.
Those skilled in the art will appreciate that the configuration of the device shown in fig. 2 is not intended to be limiting of the device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
It should be noted that the control device 200 may be a functional module built into the mowing robot; alternatively, the control device 200 may be a remote control device arranged separately from the mowing robot and connected to the mowing robot through a wireless communication module.
As shown in fig. 2, the memory 2002, which is a type of computer storage medium, may store a mowing area dividing program. In the control device 200 shown in fig. 2, the processor 2001 may be configured to call the mowing area dividing program stored in the memory 2002 and perform the steps of the mowing area dividing method described below.
The embodiment of the invention also provides a mowing area dividing method.
Referring to fig. 3, an embodiment of the mowing area dividing method of the present invention is provided, the mowing area dividing method comprising:
step S10, obtaining position characteristic parameters of the mowing robot;
the position characteristic parameters are characteristic parameters of the position of the mowing robot, and can specifically include satellite positioning information, coordinate information of the mowing robot in a reference coordinate system and the like. The reference coordinate system can be a coordinate system which is pre-established by taking the position of the charging pile near the lawn or the lawn as an original point.
Before step S10, the user may place the mowing robot in or near the lawn to be mowed. When a target area of the mowing robot needs to be customized, the user can log in to a preset application associated with the mowing robot through a terminal such as a mobile phone or a computer and send an instruction through the preset application so that the mowing robot enters a mowing area dividing mode. When detecting that the current mode of the mowing robot is the mowing area dividing mode, the control device 200 executes step S10.
Step S20, acquiring a solid image of the lawn in the area where the mowing robot is located according to the position characteristic parameters;
specifically, when the location characteristic parameter may include satellite positioning information, step S20 may include:
step S21 is to acquire a satellite map corresponding to the satellite positioning information as a solid image of the lawn in the area where the mowing robot is located.
The satellite positioning information may specifically be the longitude, latitude, and the like of the position where the mowing robot is located. The satellite positioning information of the mowing robot can be sent to a satellite map provider (such as Google Earth) to obtain a satellite map corresponding to the satellite positioning information. The obtained satellite map includes an image of the lawn in the area where the mowing robot is located and an image of the environment around the lawn, and can be used directly as the solid image of the lawn in the area where the mowing robot is located.
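A minimal sketch of how the control device or the preset application might request such a satellite map is given below. The endpoint URL, parameter names and key are placeholders for whichever satellite-map provider is actually used; only the idea of sending the robot's longitude and latitude and receiving an image back is taken from the embodiment.

```python
import requests

def fetch_satellite_image(lat, lon, zoom=19, size=(640, 640), api_key="YOUR_KEY"):
    """Request a satellite image centred on the mowing robot's position.

    The URL and parameter names are hypothetical placeholders; the returned
    image bytes would be used as the solid image of the lawn.
    """
    url = "https://maps.example.com/staticmap"  # placeholder endpoint
    params = {
        "center": f"{lat},{lon}",
        "zoom": zoom,
        "size": f"{size[0]}x{size[1]}",
        "maptype": "satellite",
        "key": api_key,
    }
    resp = requests.get(url, params=params, timeout=10)
    resp.raise_for_status()
    return resp.content
```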
In addition, a solid image of the lawn in the area where the mowing robot is located can be captured by another image acquisition device 400 (such as an aircraft with an image capturing function) independent of the mowing robot. After the coordinates of the mowing robot in the reference coordinate system are obtained, the shooting range of the image acquisition device 400 can be adjusted so that the coordinates of the mowing robot fall within the captured area, and the image acquired by the image acquisition device 400 is used as the solid image of the lawn in the area where the mowing robot is located.
Step S30 of displaying the solid image and acquiring user feedback information returned based on the solid image;
and sending the solid image to a human-computer interaction device (such as a computer, a mobile phone and the like) provided with a preset application, and controlling the preset application to display the solid image.
After the user views the solid image in the preset application of the human-computer interaction device, it can be judged whether the solid image includes the image area corresponding to the mowing range the user needs to specify. If it does not, or if only part of that mowing range lies within the solid image, then, in order to better meet the customization requirements of the user, the user can input a position correction parameter in the preset application; a target position parameter is determined according to the position characteristic parameters and the position correction parameter, and the solid image of the lawn in the area where the mowing robot is located is re-acquired and displayed according to the target position parameter.
If the displayed solid image already includes the image area corresponding to the mowing range specified by the user, the user may select a partial area of the lawn image in the solid image and specify the characteristics of the selected area by inputting instructions through the preset application. After receiving the instructions input by the user, the preset application may generate user feedback information from them and transmit the user feedback information to the control device 200. Specifically, the characteristics of the selected area may include a user-selected mowing area or a user-selected non-mowing area, and the like.
When the user needs to mow a certain partial area of the lawn, different instructions can be input according to the characteristics of the mowing area and the non-mowing area in the trimming requirement. For example, when the areas of the lawn to be mowed in the solid image are concentrated, or the shape of the areas that do not need to be mowed is regular, the user can, through an instruction input in the preset application, select the image area corresponding to that partial area in the solid image as the user-selected mowing area. When only a small part of the lawn in the solid image does not need to be mowed, the user can instead select the other image areas of the solid image, excluding the image area corresponding to that small part, as the user-selected non-mowing area. When a certain partial area of the lawn in the solid image needs to be mowed but contains an obstacle, the user can select the image area corresponding to that partial area as the user-selected mowing area and at the same time, through a further instruction, mark the image area corresponding to the obstacle inside the user-selected mowing area to form an obstacle marking area serving as a user-selected non-mowing area.
And step S40, determining a target area for mowing by the mowing robot according to the user feedback information.
The user-selected mowing area and/or the user-selected non-mowing area are extracted from the user feedback information, and the target area for mowing by the mowing robot is determined according to the solid image and the extracted user-selected mowing area and/or user-selected non-mowing area. Specifically, the user-selected mowing area may be taken as the target area; the lawn area in the solid image other than the user-selected non-mowing area may be taken as the target area; or, when the user-selected mowing area overlaps the user-selected non-mowing area, the part of the user-selected mowing area outside the user-selected non-mowing area may be taken as the target area.
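The region arithmetic just described can be expressed compactly with polygon set operations. The sketch below uses the shapely library and assumes the lawn and the user selections are available as polygons in a common coordinate frame; both the library choice and the polygon representation are illustrative assumptions.

```python
from shapely.geometry import Polygon

def determine_target_area(lawn: Polygon, mow_selected=None, no_mow_selected=None):
    """Combine the user's selections into the target area for mowing.

    Mirrors the three cases above: mowing area selected by the user, lawn minus
    the user-selected non-mowing area, or the mowing area minus any overlap
    with the non-mowing area.
    """
    if mow_selected is not None and no_mow_selected is not None:
        return mow_selected.difference(no_mow_selected)
    if mow_selected is not None:
        return mow_selected
    if no_mow_selected is not None:
        return lawn.difference(no_mow_selected)
    return lawn
```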
After the target area is determined, a series of boundary coordinate points of the target-area boundary in the reference coordinate system can be determined from the distance between each boundary point of the target area in the solid image and the mowing robot, together with the coordinates of the mowing robot in the reference coordinate system. A movement track of the mowing robot is generated within the area enclosed by the boundary coordinate points according to a preset rule, and the mowing robot is controlled to mow along the generated movement track. Alternatively, the boundary coordinate points can be used directly to limit the operating range of the mowing robot during mowing, so that the current coordinates of the mowing robot in the reference coordinate system never leave the area enclosed by the boundary coordinate points.
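One simple way to obtain the boundary coordinate points mentioned above is to map each boundary point marked on the solid image into the reference coordinate system through a known image scale, as sketched below. The assumption of a top-down image with a uniform metres-per-pixel scale, and the knowledge of the robot's pixel position, are illustrative simplifications.

```python
def boundary_pixels_to_reference(boundary_px, robot_px, robot_xy, metres_per_pixel):
    """Map boundary points marked on the solid image to reference-frame coordinates.

    boundary_px: (col, row) pixel points of the target-area boundary.
    robot_px / robot_xy: the robot's position in pixels and in the reference frame.
    Image rows grow downwards, hence the sign flip on the north axis.
    """
    rx_px, ry_px = robot_px
    rx, ry = robot_xy
    points_xy = []
    for px, py in boundary_px:
        east = (px - rx_px) * metres_per_pixel
        north = -(py - ry_px) * metres_per_pixel
        points_xy.append((rx + east, ry + north))
    return points_xy
```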
Wherein, the user can also specify the mowing height of the target area through the preset application. Different target areas may correspond to different mowing heights. Therefore, the mowing height corresponding to the target area in the user feedback information can be analyzed, and after the target area is determined, the mowing robot is controlled to perform mowing operation along the running track generated in the target area according to the corresponding mowing height.
In the mowing area dividing method proposed in this embodiment, a solid image of the lawn in the area where the mowing robot is located is acquired according to the position characteristic parameters of the mowing robot and displayed. After seeing the solid image, the user can input instructions according to the trimming requirements to specify a mowing area and/or a non-mowing area and the like in the lawn; the instructions input by the user based on the solid image form user feedback information, which is acquired, and the target area for mowing by the mowing robot is determined according to the user feedback information. The determined target area thus meets the trimming requirements of the user without manually laying a boundary, the area dividing efficiency of the mowing robot is improved, the user can customize and define a virtual mowing area according to different mowing requirements, and the creative space of the user is exploited to the maximum extent, so that the mowing robot can flexibly adapt to diversified mowing requirements. The position characteristic parameters are preferably satellite positioning information, so that the solid image of the lawn in the area where the mowing robot is located can be acquired without additionally configuring other image acquisition equipment.
Further, the step of determining the target area for mowing by the mowing robot according to the user feedback information comprises:
step S41, determining an initial target area for mowing by the mowing robot according to the user feedback information to serve as a first area;
The user-selected mowing area and/or the user-selected non-mowing area are extracted from the user feedback information, and the first area for mowing by the mowing robot is determined according to the solid image and the extracted user-selected mowing area and/or user-selected non-mowing area. Specifically, the user-selected mowing area may be taken as the first area; the lawn area in the solid image other than the user-selected non-mowing area may be taken as the first area; or, when the user-selected mowing area overlaps the user-selected non-mowing area, the part of the user-selected mowing area outside the user-selected non-mowing area may be taken as the first area.
After the first area is obtained, the mowing robot can be controlled through the preset application to enter an area verification mode. In the area verification mode, the first area is used as the mowing target area of the mowing robot only after the first area passes verification; if the first area does not pass verification, prompt information can be sent to the user through the preset application, so as to guarantee the accuracy of the determined target area.
Step S42, acquiring a first control instruction input by a user;
In the area verification mode, the user can input a first control instruction through the preset application to control the mowing robot to move along the boundary of the mowing area required by the user. The first control instruction may specifically include forward, backward, left turn, right turn, stop, and the like. In the area verification mode, the mowing device 100 of the mowing robot is switched off and does not mow.
Step S43, controlling the mowing robot to move according to the first control instruction, and acquiring a first movement track of the mowing robot;
the control device 200 controls the mowing robot to move in the lawn according to the first control instruction. And continuously acquiring the position characteristic parameters of the mowing robot in the process that the mowing robot operates according to the first control instruction to obtain a first motion track of the mowing robot.
Step S44, generating a second area according to the first motion track;
the area formed by enclosing the first motion trail of the mowing robot in the coordinate system based on the global positioning system or the area formed by enclosing the first motion trail of the mowing robot in the reference coordinate system after fitting the first motion trail of the mowing robot can be used as the second area.
Step S45 of comparing the area characteristic parameters of the first area and the second area;
It should be noted that the region characteristic parameters of the first region and the second region should be expressed in the same reference frame for comparison, for example in the reference coordinate system or in the coordinate system corresponding to the global positioning system. The region characteristic parameters may specifically include region position feature points, region area and/or region shape, and the like. The step of comparing the region characteristic parameters of the first region and the second region may specifically include: determining an area difference between the area of the first region and the area of the second region, and/or determining a similarity between the shape of the first region and the shape of the second region, and/or determining a first distance between the position feature point of the first region and the position feature point of the second region, and the like.
The position feature points can be selected according to actual requirements. For example, the center point of each of the first region and the second region may be used as its position feature point, or the two points on the boundary of each region that are farthest apart along the same straight line direction may be used as its position feature points.
And step S46, when the area characteristic parameter satisfies a preset condition, taking the first area as the target area.
And when the area difference is smaller than or equal to a preset area difference threshold value, and/or when the similarity is larger than or equal to a preset similarity threshold value, and/or when the first distance is smaller than or equal to a first preset distance threshold value, judging that the area characteristic parameter meets a preset condition, and taking the first area as the target area.
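The comparison can be sketched as follows. The area difference and the distance between position feature points follow directly from the text; using the intersection-over-union of the two regions as the shape-similarity measure and using the centroids as position feature points are assumptions made only for this illustration, as are the threshold values.

```python
from shapely.geometry import Polygon

def region_parameters_match(first: Polygon, second: Polygon,
                            max_area_diff=2.0,     # m^2, preset area difference threshold
                            min_similarity=0.9,    # preset similarity threshold
                            max_feature_dist=1.0): # m, first preset distance threshold
    """True when the area characteristic parameters meet the preset condition."""
    area_diff = abs(first.area - second.area)
    similarity = first.intersection(second).area / first.union(second).area  # IoU as a stand-in
    feature_dist = first.centroid.distance(second.centroid)  # centroids as feature points
    return (area_diff <= max_area_diff
            and similarity >= min_similarity
            and feature_dist <= max_feature_dist)
```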
In this embodiment, after the initial target area is determined according to the user feedback information to obtain the first area, the user can send the first control instruction to drive the mowing robot along the boundary of the area actually required to be mowed, thereby forming the second area. Whether the first area matches the actual mowing requirement of the user is verified by comparing the area characteristic parameters of the first area and the second area; when the area characteristic parameters meet the preset condition, the first area matches the actual mowing requirement of the user and is used as the target area for mowing by the mowing robot, thereby ensuring the accuracy of the defined target area.
The region characteristic parameters can preferably include region position characteristic points, region areas and region shapes, and when the region position characteristic points, the region areas and the region shapes of the first region and the second region meet corresponding preset conditions, the first region is taken as a target region; when at least one of the area position feature points, the area areas and the area shapes of the first area and the second area does not meet corresponding preset conditions, prompt information can be sent to a user to prompt the user to select the area again based on the solid image, and therefore the accuracy of the defined target area is further improved.
Specifically, the step of determining the initial target area for mowing by the mowing robot according to the user feedback information comprises:
step S411, extracting area boundary identification information in the user feedback information;
When the user selects a region based on the displayed solid image, the region boundary identification information can be formed by inputting control instructions through the preset application. The type of region boundary identification information may specifically include points, lines, or borders that the user marks at intervals along the boundary of the region selected on the solid image. The user feedback information may therefore include the region boundary identification information. Selected regions with different region characteristics correspond to different region boundary identification information; for example, the mowing region selected by the user corresponds to first region boundary identification information, and the non-mowing region selected by the user corresponds to second region boundary identification information. Thus, the user feedback information may also include region boundary identification information for different region characteristics.
Step S412, generating the initial target area according to the area boundary identification information.
Specifically, when the area identification information is area boundary identification information corresponding to a mowing area selected by a user, a closed area is generated according to the area boundary identification information in a fitting mode and serves as an initial target area; and when the area identification information is the area boundary identification information corresponding to the non-mowing area selected by the user, generating a closed area according to the area boundary identification information in a fitting manner, and taking the area outside the closed area in the solid image as an initial target area. If the user feedback information includes zone boundary identification information of different zone characteristics (such as when the user feedback information includes zone boundary identification information corresponding to a mowing zone selected by the user and a non-mowing zone selected by the user), corresponding closed zones can be respectively generated according to the zone boundary identification information corresponding to the different zone characteristics, and an initial target zone is determined according to the generated closed zones.
In this way, the user can directly demarcate the required mowing range on the solid image to generate the corresponding area boundary identification information, and the area boundary identification information in the user feedback information is extracted to determine the initial target area, so that the target area can be flexibly demarcated according to the user's requirements.
Specifically, referring to fig. 5, the area boundary identification information includes mowing area boundary identification information and obstacle area boundary identification information, and the step of generating the initial target area according to the area boundary identification information includes:
step S4121, generating a mowing area selected by a user according to the mowing area boundary identification information, and generating an obstacle marking area according to the obstacle area boundary identification information;
generating a corresponding closed area according to the boundary identification information of the mowing area, and performing processes such as boundary smoothing and the like on the generated closed area to form a mowing area selected by a user; and generating a corresponding closed region according to the boundary identification information of the obstacle region, and performing processes such as boundary smoothing and the like on the generated closed region to form an obstacle marking region.
The mowing area boundary identification information and the obstacle area boundary identification information can both be of the same type, such as points, graphs or frames marked at intervals along the area boundary. Alternatively, they can be set to different types according to actual requirements; for example, the mowing area boundary identification information can be marked points while the obstacle area boundary identification information is a graph line, and so on.
Step S4122, determining the initial target zone according to the user selected mowing zone and the obstacle marking zone.
Wherein, the overlapping area of the obstacle marking area and the user-selected mowing area can be determined, and the other areas of the user-selected mowing area except the overlapping area are used as the initial target area.
In this way, the user can visually identify and mark obstacles in the lawn on the solid image, which on the one hand ensures the accuracy of the determined initial target area and improves the accuracy of subsequent path planning of the mowing robot, and on the other hand improves the obstacle recognition accuracy of the mowing robot and ensures its stable operation.
Specifically, referring to fig. 6, when the mowing area boundary identification information is a graph, the step of generating the mowing area selected by the user according to the mowing area boundary identification information includes:
Step S401, judging whether the graph lines enclose a closed area;
if the closed region is not enclosed, executing step S402 and step S403; if a closed region is enclosed, step S404 is executed.
Step S402, when the graph is a single line, determining a second distance between the two end points of the line;
When the graph is a single line, its two end points can be identified and the second distance between them calculated.
Step S403, when the second distance is smaller than or equal to a second preset distance threshold, connecting the two endpoints by using a line segment, and using a closed area formed by enclosing the line segment and the graph as the mowing area selected by the user.
And step S404, taking the closed area enclosed by the graph as the mowing area selected by the user.
The second preset distance threshold can be set according to actual requirements. When the graph is a single line that does not enclose a closed area, this indicates that, when the user demarcated the area on the solid image, the mowing area boundary identification information input by the user failed to form a closed area fully fitting the trimming requirement, for example because of operation deviation or other factors. It is therefore judged whether the second distance is smaller than or equal to the second preset distance threshold. If it is, the shape of the closed area obtained by connecting the two end points with a line segment can be considered to meet the trimming requirements of the user, and the closed area enclosed by the line segment and the graph can be used as the mowing area selected by the user. If the second distance is greater than the second preset distance threshold, the closed area obtained by connecting the two end points with a line segment would still differ too much in shape from the mowing area required by the user; in that case prompt information can be sent to prompt the user to re-input the instructions for area division. The accuracy of the finally defined target area is thereby ensured.
If the graph consists of several lines, prompt information can be sent to prompt the user to re-input the instructions for area division; alternatively, a third distance between the end points of each two adjacent lines can be determined, and if all the third distances among the lines are smaller than or equal to the second preset distance threshold, the end points of adjacent lines can be connected head to tail with line segments, and the resulting enclosed area used as the mowing area selected by the user.
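A compact way to realise steps S401 to S404, including the multi-line case just described, is sketched below; the point-sequence representation of the graph lines and the threshold value are placeholders rather than features of the embodiment.

```python
import math
from shapely.geometry import Polygon

def mowing_area_from_graph(lines, second_dist_threshold=1.0):
    """Form the user-selected mowing area from one or more drawn graph lines.

    lines is a list of point sequences [(x, y), ...]. Returns a Polygon, or
    None when an end-point gap exceeds the threshold and the user should be
    prompted to redraw.
    """
    if len(lines) == 1:
        pts = list(lines[0])
        if pts[0] == pts[-1]:                 # already encloses a closed area (S404)
            return Polygon(pts)
        gap = math.dist(pts[0], pts[-1])      # second distance (S402)
        return Polygon(pts) if gap <= second_dist_threshold else None  # S403
    # Several lines: every gap between adjacent end points must be small enough,
    # after which the lines are connected head to tail into one enclosed area.
    for cur, nxt in zip(lines, lines[1:] + [lines[0]]):
        if math.dist(cur[-1], nxt[0]) > second_dist_threshold:
            return None
    return Polygon([p for line in lines for p in line])
```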
In addition, when the type of the obstacle area boundary identification information is a graph, the obstacle marking area may be determined by analogy with reference to S401 to S404, which is not described herein again.
Referring to fig. 7, before the step of displaying the solid image and acquiring the user feedback information returned based on the solid image, the method further includes:
step S01, judging whether the image quality of the solid image meets the preset requirement;
if yes, executing the step S30 and the step S40; if not, the steps S02, S03 and S04 are executed.
Specifically, it may be determined whether the sharpness of the solid image meets a preset sharpness requirement; if so, steps S30 and S40 are performed; if not, steps S02, S03 and S04 are performed.
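Step S01 does not fix how the image quality is judged; one common proxy for sharpness is the variance of the Laplacian, shown below with OpenCV. Both the measure and the threshold value are assumptions introduced only for this sketch.

```python
import cv2

def solid_image_is_sharp_enough(image_path, sharpness_threshold=100.0):
    """Judge whether the solid image meets a preset sharpness requirement."""
    image = cv2.imread(image_path)
    if image is None:
        return False  # image missing or unreadable
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Variance of the Laplacian: low values indicate a blurred image.
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= sharpness_threshold
```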
Step S02, acquiring a second control instruction input by the user;
When the image quality of the solid image does not meet the preset requirement, the mowing robot can be controlled to enter a manual area adjustment mode, and the target area is generated according to a second control instruction input by the user and the information collected by the mowing robot in this mode. Specifically, the user can input the second control instruction through the preset application to control the mowing robot to move along the boundary of the mowing area required by the user. The second control instruction may specifically include forward, backward, left turn, right turn, stop, and the like. In the manual area adjustment mode, the mowing device 100 of the mowing robot is switched off and does not mow.
Step S03, controlling the mowing robot to move according to the second control instruction, and acquiring a second movement track of the mowing robot;
the control device 200 controls the mowing robot to move in the lawn according to the second control instruction, and meanwhile, the control device 200 can control the image acquisition device 400 to acquire the environment image information. The environment image information may specifically include an image of an object in the environment in which the mowing robot is located, depth information of each object, and the like. And continuously acquiring the position characteristic parameters of the mowing robot in the process that the mowing robot operates according to the second control instruction to obtain a second motion track of the mowing robot.
And step S04, determining the target area according to the second motion track.
The area enclosed by the second movement track of the mowing robot in the coordinate system of the global positioning system, or the area enclosed by the second movement track in the reference coordinate system, can be used as the target area.
Alternatively, a running map of the mowing robot can be constructed from the environment image information, the second movement track of the mowing robot fitted into the constructed running map, and the closed area enclosed by the second movement track in the running map used as the target area. After the target area is determined, the running map and the second movement track in the map can be displayed for the user to confirm.
In this way, even when the image quality of the solid image is poor, a target area meeting the mowing requirement of the user can be defined, and the accuracy of the defined target area is ensured.
Referring to fig. 8, after the step of displaying the solid image and acquiring the user feedback information returned based on the solid image, the method further includes:
step S60, acquiring the current track generation mode of the mowing robot;
The track generation modes may specifically include a first mode, a second mode, and the like. When the mowing robot is in the first mode, the user can customize the driving path of the mowing robot; when the mowing robot is in the second mode, the control device 200 plans the driving path of the mowing robot within the target area according to the preset rule. The user can input an instruction through the preset application to switch the current track generation mode of the mowing robot, and then input, in the selected track generation mode, the instructions that match the trimming requirements to generate the user feedback information.
And executing the steps S70 and S80 when the current track generation mode of the mowing robot is in the first mode.
Step S70, extracting the user-specified path in the user feedback information;
The user can draw a user-specified path on the solid image in the human-computer interaction device by a sliding operation or the like, and the generated user-specified path forms the user feedback information. Accordingly, in the first mode the control device 200 extracts the user-specified path from the user feedback information.
Step S80, determining the traveling path of the mowing robot according to the path designated by the user;
The user-specified path is taken as the driving path of the mowing robot. Specifically, the coordinate parameters of each point in the user-specified path can be determined in the reference coordinate system, and the mowing robot is controlled to run according to the determined coordinate parameters, so that the mowing robot travels along the user-specified path.
And executing the steps S40 and S50 when the current track generation mode of the mowing robot is in the second mode.
And step S50, generating a driving path of the mowing robot in the target area according to a preset rule.
The preset rule may be set according to actual requirements, for example, a plurality of parallel paths that are arranged at intervals and are connected end to end may be generated in the target area along a preset direction.
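As one concrete instance of such a preset rule, the sketch below generates parallel sweep lines spaced at the cutting width inside the target area and links them end to end into a back-and-forth driving path; the spacing value and the shapely-based clipping are implementation assumptions.

```python
from shapely.geometry import LineString, Polygon

def boustrophedon_path(target: Polygon, spacing=0.3):
    """Generate a back-and-forth driving path inside the target area.

    spacing is the distance between adjacent parallel passes (e.g. the cutting
    width in metres); the sweep direction alternates so that the passes can be
    connected end to end.
    """
    min_x, min_y, max_x, max_y = target.bounds
    path, y, left_to_right = [], min_y + spacing / 2, True
    while y < max_y:
        sweep = LineString([(min_x - 1, y), (max_x + 1, y)]).intersection(target)
        for seg in getattr(sweep, "geoms", [sweep]):  # handle MultiLineString results
            if seg.is_empty or seg.length == 0:
                continue
            coords = list(seg.coords)
            path.extend(coords if left_to_right else coords[::-1])
        y += spacing
        left_to_right = not left_to_right
    return path  # ordered waypoints in the reference coordinate system
```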
In this way, the lawn can be patterned according to the requirements of the user, and the flexibility of the mowing robot is further improved to meet the diversified requirements of the user.
Furthermore, an embodiment of the present invention provides a computer-readable storage medium on which a mowing area dividing program is stored; when executed by a processor, the mowing area dividing program implements the operations of the relevant steps of the mowing area dividing method in the above embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.