CN117785003A - Screenshot management method, screenshot management device, computer equipment and medium - Google Patents
- Publication number: CN117785003A
- Application number: CN202311789536.1A
- Authority: CN (China)
- Prior art keywords: image, three-dimensional model, user, visual interface, displaying
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Landscapes: Processing Or Creating Images (AREA)
Abstract
The application provides a screenshot management method, a screenshot management apparatus, a computer device, and a medium. The screenshot management method includes: in response to a screenshot instruction issued by a user, intercepting an image of a designated area in a visual interface, where the designated area contains at least one part of a three-dimensional model; binding the image with the pose parameters of the three-dimensional model and storing the image into an image list, where the image list includes a number of images, each representing a two-dimensional image of the three-dimensional model at one viewing angle; and in response to an operation instruction of the user for a specified image in the image list, rotating the three-dimensional model to the viewing angle corresponding to the specified image and displaying it on the visual interface, where the viewing angle corresponding to the specified image is determined by the pose parameters of the three-dimensional model bound to that image. In this way, users can communicate intuitively with reference to the three-dimensional model displayed on the visual interface, so that highly specialized content becomes easier to understand and communication efficiency is improved.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and apparatus for managing a screenshot, a computer device, and a medium.
Background
In some scenarios, when users communicate with each other on the basis of two-dimensional images, one party may lack the relevant expertise and therefore find the explanation of the two-dimensional images difficult to understand, resulting in low communication efficiency.
For example, in a medical scenario, a physician typically communicates with a patient over a paper copy of a medical image, but the patient lacks the relevant expertise and finds it difficult to understand how the physician describes the medical image. As a result, the physician cannot fully grasp the needs and expectations of the patient, and it is difficult to help the patient understand the treatment plan and the expected outcome.
Disclosure of Invention
In view of the foregoing, the present application provides a screenshot management method, apparatus, computer device and medium, so as to solve the deficiencies in the related art.
In order to achieve the above purpose, the present application provides the following technical solutions:
according to a first aspect of an embodiment of the present invention, there is provided a screenshot management method, including:
in response to a screenshot instruction issued by a user, intercepting an image of a designated area in a visual interface, where the designated area contains at least one part of a three-dimensional model;
binding the image with the pose parameters of the three-dimensional model, and storing the image into an image list; the image list includes a number of images, each representing a two-dimensional image of the three-dimensional model at one viewing angle;
in response to an operation instruction of the user for a specified image in the image list, rotating the three-dimensional model to the viewing angle corresponding to the specified image and displaying it on the visual interface; the viewing angle corresponding to the specified image is determined by the pose parameters of the three-dimensional model bound to that image.
According to a second aspect of an embodiment of the present invention, there is provided a screenshot managing apparatus including:
a model screenshot module, configured to intercept, in response to a screenshot instruction issued by a user, an image of a designated area in the visual interface, where the designated area contains at least one part of a three-dimensional model;
an image storage module, configured to bind the image with the pose parameters of the three-dimensional model and store the image into an image list; the image list includes a number of images, each representing a two-dimensional image of the three-dimensional model at one viewing angle;
a model rotating module, configured to rotate, in response to an operation instruction of the user for a specified image in the image list, the three-dimensional model to the viewing angle corresponding to the specified image and display it on the visual interface; the viewing angle corresponding to the specified image is determined by the pose parameters of the three-dimensional model bound to that image.
According to a third aspect of the embodiments of the present invention, there is provided a computer device comprising a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor, the machine-executable instructions causing the processor to perform the method of the first aspect.
According to a fourth aspect of embodiments of the present invention, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of the first aspect.
According to the technical solutions above, the intercepted image containing at least one part of the three-dimensional model is bound with the pose parameters of the three-dimensional model and stored in the image list, so that, in response to an operation instruction of the user for a specified image in the image list, the three-dimensional model rotated to the viewing angle corresponding to the specified image can be displayed on the visual interface. Users can thus communicate intuitively with reference to the three-dimensional model displayed on the visual interface, highly specialized content becomes easier to understand, and communication efficiency is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic diagram illustrating a screenshot management method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a visual interface shown in accordance with an embodiment of the present application;
FIG. 3 is a schematic diagram of another visual interface shown according to an embodiment of the present application;
FIG. 4 is a schematic diagram of another visual interface shown according to an embodiment of the present application;
FIG. 5 is a schematic diagram of another visual interface shown according to an embodiment of the present application;
FIG. 6 is a block diagram of a screenshot managing apparatus according to an embodiment of the present application;
fig. 7 is a schematic diagram of a hardware structure of a computer device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the appended claims.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of the present application, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
In some scenarios, when users communicate with each other on the basis of two-dimensional images, one party may lack the relevant expertise and therefore find the explanation of the two-dimensional images difficult to understand, resulting in low communication efficiency. For example, when a doctor communicates with a patient over a paper copy of a medical image, the patient, lacking the relevant expertise, cannot understand the content of the medical image; when engineers communicate with customers over a paper copy of a design, the customers, lacking the relevant expertise, have difficulty understanding how the engineers describe the design; and when a teacher teaches with pictures of cultural relics, artworks or buildings, the students, lacking the relevant expertise, cannot understand the explanation of the pictures.
FIG. 1 is a flow chart illustrating a method of screenshot management according to an embodiment of the present application, including:
step S101, responding to a screenshot instruction sent by a user, and intercepting an image of a designated area in a visual interface, wherein the designated area comprises at least one part of a three-dimensional model.
Step S102, binding the image with pose parameters of the three-dimensional model, and storing the image into an image list; the image list includes a number of images that characterize a two-dimensional image of the three-dimensional model at one of the viewing angles.
Step S103, responding to an operation instruction of a user for a specified image in the image list, and displaying the three-dimensional model on the visual interface after rotating the three-dimensional model to a view angle corresponding to the specified image; and the view angle corresponding to the designated image is determined by the pose parameters of the three-dimensional model bound by the image.
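A minimal sketch, in Python, of the capture-bind-restore flow of steps S101 to S103. The names (`Pose`, `ScreenshotManager`, `set_pose`) are illustrative assumptions, not identifiers from the patent:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Pose:
    """Pose parameters of the three-dimensional model (hypothetical structure)."""
    rotation: tuple   # angle parameter as a quaternion (w, x, y, z)
    position: tuple   # position parameter (x, y, z)
    scale: float      # scale parameter (uniform zoom factor)


class ScreenshotManager:
    """Binds each intercepted image to the model pose at capture time (step S102)
    so that the pose can be restored later (step S103)."""

    def __init__(self):
        self.image_list = []  # entries: (image_bytes, Pose)

    def capture(self, image_bytes, current_pose):
        # Steps S101/S102: store the screenshot together with its pose.
        self.image_list.append((image_bytes, current_pose))
        return len(self.image_list) - 1  # index acts as the "specified image" handle

    def restore(self, index, model):
        # Step S103: rotate/scale/move the model back to the bound pose.
        _, pose = self.image_list[index]
        model.set_pose(pose)
        return pose
```

The bound pose is immutable (`frozen=True`), so later interaction with the model cannot retroactively change what a stored screenshot points back to.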
In this embodiment, the three-dimensional model may be obtained by scanning with a three-dimensional scanning device such as an intraoral scanner, an extraoral scanner, a facial scanner, or a professional scanner. The three-dimensional model may include a three-dimensional tooth model, a three-dimensional face model, a human-body or other organ model, an industrial product model, an industrial equipment model, a cultural relic model, an artwork model, a prosthesis model, a medical instrument model, a building model, and the like, which is not limited in this application.
In this embodiment, the screenshot instruction issued by the user may be an instruction issued when the user triggers a screenshot control on the visual interface, or may be a voice instruction, a gesture instruction, a remote control instruction, or the like, issued by the user to indicate a screenshot.
In this embodiment, the designated area may be the whole visual interface on which the three-dimensional model is displayed, or may be an area corresponding to a default display position of the three-dimensional model in the visual interface, or may be an area selected by the user on the visual interface for the three-dimensional model, which is not limited in this application.
In this embodiment, the three-dimensional model may be displayed on the visual interface in advance, and the process of model rotation may be shown to the user, so that the user's interactive visual experience can be improved.
In this embodiment, the pose parameters of the three-dimensional model may include an angle parameter, a position parameter, and a scale parameter. The angle parameter may be represented by a quaternion, Euler angles, a rotation vector, or the like, and determines the orientation of the three-dimensional model in three-dimensional space; the position parameter may be represented by three-dimensional coordinates and determines the position of the three-dimensional model in three-dimensional space; the scale parameter may be represented by scale factors, which may include the scaling of the three-dimensional model along the X-axis, Y-axis, and Z-axis of the three-dimensional coordinate system, and is used to shrink or enlarge the three-dimensional model.
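As an illustration of how the three pose parameters act on the model, the sketch below applies a uniform scale, a quaternion rotation, and a translation to one vertex. The function names are assumptions for illustration; the quaternion rotation uses the standard identity v' = v + w·t + u × t with t = 2(u × v):

```python
import math


def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])


def quat_from_axis_angle(axis, angle_rad):
    """Unit quaternion (w, x, y, z) for a rotation of angle_rad about a unit axis."""
    s = math.sin(angle_rad / 2.0)
    return (math.cos(angle_rad / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)


def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = (x, y, z)
    t = tuple(2.0 * c for c in cross(u, v))
    ut = cross(u, t)
    return tuple(v[i] + w * t[i] + ut[i] for i in range(3))


def apply_pose(vertex, rotation, position, scale):
    """Apply the pose parameters to one model vertex:
    scale first, then rotate by the angle parameter, then translate."""
    scaled = tuple(scale * c for c in vertex)
    rotated = quat_rotate(rotation, scaled)
    return tuple(rotated[i] + position[i] for i in range(3))
```

For example, a 90-degree rotation about the Z-axis maps the vertex (1, 0, 0) to (0, 1, 0); with scale 2 and position (0, 0, 1) the result becomes (0, 2, 1).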
In this embodiment, the three-dimensional model may further be enlarged or reduced to the scale corresponding to the specified image, and/or moved to the position corresponding to the specified image. The scale corresponding to an image can be determined from the scale parameter of the three-dimensional model bound to that image, and the position corresponding to an image can be determined from the position parameter of the three-dimensional model bound to that image.
Illustratively, as shown in figs. 2 and 3, a three-dimensional tooth model and an image list containing image 1 and image 2 are presented on the visual interface; fig. 2 shows the three-dimensional tooth model in its initial pose. In response to an operation instruction of the user for image 2, the three-dimensional tooth model in the initial pose is rotated to the viewing angle corresponding to image 2 and enlarged according to the scale parameter of the three-dimensional tooth model bound to image 2, and the three-dimensional tooth model after the pose change is displayed as shown in fig. 3. The position of the center point of image 2 on the three-dimensional tooth model can also be marked, so as to highlight the suspected tooth lesion area in image 2.
According to this embodiment, the intercepted image containing at least one part of the three-dimensional model is bound with the pose parameters of the three-dimensional model and stored in the image list, so that, in response to an operation instruction of the user for a specified image in the image list, the three-dimensional model rotated to the viewing angle corresponding to the specified image can be displayed on the visual interface. Users can thus communicate intuitively with reference to the three-dimensional model displayed on the visual interface, highly specialized content becomes easier to understand, and communication efficiency is improved.
In an embodiment, the user may communicate with respect to the three-dimensional model displayed by the visual interface, and may record the communication content to generate a communication record, where the type of the communication record may include text record, audio record, video record, and the like, which is not limited in this application.
In this embodiment, semantic recognition may be performed on the communication record to obtain a semantic recognition result, and a communication summary may be generated based on the semantic recognition result. The communication summary may include abstract information, the communication time, the participants, and the speech records of each participant. The abstract information may comprise a number of keywords, through which the position of the user's region of interest on the three-dimensional model can be determined; at that point, a screenshot of the region of interest in the three-dimensional model can be taken and saved into the image list.
In this embodiment, the description information of the region of interest may also be determined by the keyword based on the semantic recognition result, and the inspection report may be generated based on the image on which the region of interest is displayed and the description information of the corresponding region of interest. The keywords extracted from the semantic recognition result may be different for different types of three-dimensional models.
For example, for the three-dimensional tooth model, the keywords may be "tooth No. 1 has a problem, there is caries", "tooth No. 5 has a problem", etc. Through the keywords it can be determined which tooth-position number of the three-dimensional tooth model the region of interest corresponds to. The description information of the region of interest may include the health condition of the tooth with that tooth-position number; a region of interest with a health problem may be called a suspected lesion area, and the description information may also include the name of the oral disease corresponding to the suspected lesion area. The type of the generated examination report may be an oral health examination report. If the image of the region of interest has a corresponding near-infrared image, both the image of the region of interest and the near-infrared image can be selected and used to generate the oral health examination report.
For example, for the three-dimensional face model, the keywords may be "the tip of the nose is low", "there are blackheads on the nose", "the inter-eye distance is too large", etc. Through the keywords it can be determined which facial-feature region of the three-dimensional face model the region of interest is located in; the description information of the region of interest may include the problems present in that facial-feature region, and the type of the generated examination report may be a facial examination report.
For example, for an industrial product model, the keywords may be "the central hole of the gear is too small", "the top end of the central shaft is worn", etc. Through the keywords it can be determined in which region of the industrial product model the relevant part is located; the description information of the region of interest may include the problems present in the region of that part, and the type of the generated examination report may be an industrial product inspection report.
In this embodiment, the semantic recognition may be performed based on natural language processing models such as GPT (Generative Pre-trained Transformer), OPT (Open Pre-trained Transformer), LLaMA (Large Language Model Meta AI), and the like; corresponding examination reports may also be generated for other kinds of three-dimensional models, which is not limited in this application.
In an embodiment, the screenshot management method may further include:
Step S103: in response to an operation instruction of the user for the intercepted image, displaying at least one control on the visual interface, where the controls include:
the first control K1 is used for enabling a user to trigger the operation of label management on the intercepted image;
a second control K2 for enabling the user to trigger an operation of switching to a mode that allows the user to mark the intercepted image; where the second control K2 includes at least one of the following sub-controls:
a first sub-control K21 for enabling a user to trigger an operation of adjusting the color of the mark.
A second sub-control K22 for enabling a user to trigger an operation of adjusting the width of the mark.
And a third sub-control K23, configured to enable a user to trigger an operation for adjusting the timeliness of the mark.
In this embodiment, performing label management on the intercepted image may include operations of generating a label, editing the label, deleting the label, and the like on the intercepted image, and the label may include a text label, a pattern label, and the like, which is not limited in this application.
In this embodiment, the user may adjust the timeliness of the mark on the image by triggering the third sub-control, hiding or continuing to display the mark when a specified time (e.g., 3 s) is reached.
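As a minimal sketch of the mark-timeliness behaviour adjusted through the third sub-control K23, a mark can carry a time-to-live after which it is hidden, or a "continue to display" flag. The class and attribute names are hypothetical; the 3 s default mirrors the example above:

```python
class TimedMark:
    """A mark on an intercepted image that hides after `ttl_seconds`,
    unless the user chose to keep displaying it (hypothetical names)."""

    def __init__(self, ttl_seconds=3.0, persistent=False):
        self.ttl = ttl_seconds          # timeliness set via sub-control K23
        self.persistent = persistent    # "continue to display" mode

    def visible(self, elapsed_seconds):
        # The mark stays visible until its time-to-live expires,
        # or indefinitely when the persistent mode is chosen.
        return self.persistent or elapsed_seconds < self.ttl
```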
In this embodiment, the visual interface may further display a third control K3 (not shown in the figure) for enabling the user to trigger an operation of switching the display state of the image list. The visual interface may also display an image switching control for the image list, for enabling a user to trigger an operation of switching the image currently displayed on the visual interface in the image list.
In this embodiment, at least one of the following controls may also be presented on the visual interface:
a fourth control K4 for causing the user to trigger an operation of switching to a mode that allows the user to erase the mark on the intercepted image;
a fifth control K5 for enabling the user to trigger an undo of the last operation;
a sixth control K6 for enabling the user to trigger a redo of the last undone operation;
a seventh control K7, configured to enable the user to trigger an operation of hiding the controls displayed on the visual interface after responding to the operation instruction of the user for the intercepted image;
an eighth control K8 for enabling the user to trigger an operation of saving the marked intercepted image to the image list.
In the application, the control can be any form of control such as a button, a sliding block and the like, and the control can be located at any position of the visual interface, and descriptive information can be added to the control for user identification, so that the application is not limited in this respect.
In an embodiment, after capturing the image of the specified area in the visual interface, the screenshot management method may further include:
step S104: the intercepted image is reduced and then displayed in a preset area of a visual interface;
the operation instruction of the user for the intercepted image in step S103 may be an operation instruction triggered by the user for the reduced intercepted image in a predetermined time.
In this embodiment, the screenshot management method may further include:
Step S105: hiding the reduced intercepted image when no operation instruction triggered by the user for the reduced intercepted image is received within the predetermined time, and/or when an operation instruction triggered by the user for an area of the visual interface other than the reduced intercepted image is received.
Illustratively, as shown in fig. 4, the intercepted image is reduced and displayed in the lower right-corner area of the visual interface. If the user triggers the reduced intercepted image within 3 seconds, the controls are displayed on the visual interface as shown in fig. 5, and the reduced intercepted image may further be enlarged and displayed at a designated position of the visual interface, so that the user can perform operations such as marking and label management on the intercepted image.
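The thumbnail behaviour of steps S104 and S105 can be sketched as a small state object; the class and method names are illustrative assumptions, and the 3 s timeout follows the example above:

```python
class ThumbnailState:
    """Minimal state sketch for the reduced intercepted image (steps S104/S105)."""

    def __init__(self, timeout_s=3.0):
        self.timeout = timeout_s
        self.visible = True      # thumbnail shown in the corner (fig. 4)
        self.expanded = False    # enlarged with marking controls (fig. 5)

    def on_click(self, inside, elapsed_s):
        if not self.visible:
            return
        if inside and elapsed_s < self.timeout:
            self.expanded = True   # user triggered the thumbnail in time
        else:
            self.visible = False   # click outside the thumbnail hides it

    def on_tick(self, elapsed_s):
        # No interaction within the predetermined time: hide the thumbnail.
        if elapsed_s >= self.timeout and not self.expanded:
            self.visible = False
```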
In an embodiment, the operation instruction of the user for the intercepted image in step S103 may be an operation instruction triggered by the user for the intercepted image in the image list.
In an embodiment, the specified image may be an image on which marking information is displayed. In this case, responding to the operation instruction of the user for the specified image in the image list in step S102 may further include:
Step S1021: determining that the specified image is an image on which marking information is displayed.
In step S102, after the three-dimensional model is rotated to the viewing angle corresponding to the specified image, displaying the three-dimensional model on the visual interface may further include:
Step S1022: displaying the marking information on the visual interface.
In an embodiment, the image list may further include an image displayed with the tag information, and the image displayed with the tag information may be displayed on the visual interface in response to an operation instruction of the user for the image displayed with the tag information in the image list.
In an embodiment, the images in the image list can be filtered so that only the images with marking information are displayed, and the user can trigger, through the image switching control for the image list, an operation of switching the image with marking information currently displayed on the visual interface, thereby improving communication efficiency.
In an embodiment, when the three-dimensional model is a three-dimensional tooth model or a three-dimensional face model, the specified image may include an image displaying a suspected tooth lesion area, and the suspected tooth lesion area and the oral disease name corresponding to it may be obtained based on the result of identifying the three-dimensional model by a pre-trained oral examination model, and/or based on the result of the user's identification of the image.
In an embodiment, the step of obtaining the suspected dental lesion area and the name of the oral disease corresponding to the suspected dental lesion area based on the result of identifying the three-dimensional model by the pre-trained oral examination model may include:
identifying a plurality of suspected tooth lesion areas of the three-dimensional model based on a pre-trained oral cavity detection model, and obtaining a confidence coefficient set of each suspected tooth lesion area corresponding to a plurality of oral diseases;
for each suspected tooth lesion area, determining a candidate oral disease list corresponding to the suspected tooth lesion area according to the disease confidence set, or according to the disease confidence set and an oral disease classification sequence preset in a target oral health examination report template;
and determining the oral disease name corresponding to the suspected dental lesion area from the candidate oral disease list according to a preset rule and/or in response to a selection operation of a user.
In this embodiment, when a plurality of suspected tooth lesion areas of the three-dimensional model are identified based on the pre-trained oral cavity detection model, a screenshot may be performed for the suspected tooth lesion areas in the three-dimensional model, and stored in an image list.
In the present embodiment, the oral cavity detection model may include various machine learning models for identifying oral cavity diseases, such as periodontal disease identification model, caries identification model, and tooth width measurement model, which are not limited in this application.
The target oral cavity health examination report template can be obtained from the server based on the template ID in response to a template obtaining instruction carrying the template ID. The server may be configured to provide a set of oral health examination report templates, where the set of oral health examination report templates includes a plurality of oral health examination report templates generated for a same oral disease, and the oral health examination report templates define candidate names of the oral disease, where the candidate names are generic names or custom names.
In the present application, the server may include a cloud server, a local server, and the like, which is not limited in this application.
In this embodiment, the oral disease name corresponding to the suspected tooth lesion area may be determined from the candidate oral disease list according to a preset rule and/or in response to a selection operation of the user, and the candidate name defined for that oral disease in the target oral health examination report template may then be used as the oral disease name corresponding to the currently identified suspected tooth lesion area. The preset rule may be, for example, to select the N-th candidate oral disease in the candidate oral disease list by default. The selection operation of the user may mean that the user directly selects a certain candidate oral disease in the candidate oral disease list, or that the user selects a specific rule, where the specific rule may be to automatically select the first candidate oral disease in the candidate oral disease list corresponding to each suspected tooth lesion area.
In an embodiment, the step of determining a list of candidate oral diseases corresponding to the suspected dental lesion area according to the set of disease confidence values may include:
determining, from the disease confidence set, the oral diseases corresponding to the top-N highest disease confidences as elements of the candidate oral disease list, where N is a positive integer.
Specifically, for example, if the oral diseases corresponding to the top 3 confidences in the disease confidence set are severe dentition crowding, anterior crossbite, and deep overjet of the anterior teeth, these oral diseases may be used as elements of the candidate oral disease list, that is, as candidate oral diseases.
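Under the assumption that the disease confidence set is a mapping from disease name to confidence value, the top-N selection can be sketched as follows (function and disease names are illustrative only):

```python
def top_n_candidates(disease_confidences, n=3):
    """Return the N oral diseases with the highest confidences
    as the candidate oral disease list (first strategy)."""
    ranked = sorted(disease_confidences.items(),
                    key=lambda kv: kv[1], reverse=True)
    return [disease for disease, _ in ranked[:n]]
```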
In an embodiment, the step of determining the candidate oral disease list corresponding to the suspected dental lesion area according to the set of disease confidence levels and the oral disease classification sequence preset by the target oral health inspection report template may include:
determining, from the disease confidence set, the oral diseases corresponding to the top-N highest disease confidences as elements of the candidate oral disease list, where N is a positive integer smaller than the preset length of the candidate oral disease list;
determining, according to the oral disease classification sequence preset in the target oral health examination report template, a number of oral diseases of the same category as the first candidate oral disease as further elements of the candidate oral disease list; the first candidate oral disease is the oral disease corresponding to the N-th highest disease confidence in the disease confidence set.
In this embodiment, besides obtaining candidate oral diseases for the suspected tooth lesion area according to the confidences, oral diseases of the same category as the oral disease corresponding to the N-th highest confidence may be selected as elements of the candidate oral disease list, that is, as candidate oral diseases, according to the oral disease classification sequence preset in the target oral health examination report template. The oral diseases of the same category may be oral diseases belonging to the same level as the first candidate oral disease, or other oral diseases within the upper M levels (M being a positive integer) of the category corresponding to the first candidate oral disease, which is not limited in this application.
Specifically, for example, suppose the preset length of the candidate oral disease list is 3, and the oral disease corresponding to the highest confidence in the disease confidence set is severe dentition crowding, which is taken as an element of the candidate oral disease list. At this point the list contains fewer than 3 elements, so it is further determined, according to the oral disease classification sequence preset in the target oral health examination report template, that severe dentition crowding belongs to the malocclusion category. Oral diseases at the same level, such as anterior crossbite and deep overjet of the anterior teeth, may then be preferentially selected as elements of the candidate oral disease list so that the number of elements meets the preset length requirement. If the number of same-level oral diseases is insufficient, other oral diseases may be searched for level by level upward and used as elements of the candidate oral disease list until the preset length requirement is met.
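The two-step construction described above (top-N by confidence, then padding from the same category) can be sketched as follows; the disease names and taxonomy used here are illustrative assumptions, not part of the application.

```python
# Hypothetical sketch of candidate-list construction: top-N by confidence,
# then pad with same-category diseases until the preset length is reached.
from typing import Dict, List

# Assumed taxonomy: category -> diseases, in the preset classification order.
TAXONOMY: Dict[str, List[str]] = {
    "malocclusion": ["severe dentition crowding", "anterior crossbite", "deep overjet"],
    "caries": ["enamel caries", "dentin caries"],
}

def build_candidate_list(confidences: Dict[str, float], n: int, preset_len: int) -> List[str]:
    # Step 1: take the top-N diseases by confidence (N < preset_len).
    ranked = sorted(confidences, key=confidences.get, reverse=True)
    candidates = ranked[:n]
    # Step 2: pad with diseases from the category of the Nth-ranked
    # (first candidate) disease, in the preset classification order.
    first_candidate = candidates[-1]
    for diseases in TAXONOMY.values():
        if first_candidate in diseases:
            for d in diseases:
                if len(candidates) >= preset_len:
                    break
                if d not in candidates:
                    candidates.append(d)
    return candidates[:preset_len]
```

A real implementation would also handle the upward, level-by-level search when the same-level diseases run out; this sketch stops at one category for brevity.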
In this embodiment, a preset oral disease list may also be determined according to the oral disease classification sequence preset in the target oral health examination report template, and, in response to a selection operation by the user, an oral disease may be determined from the preset oral disease list as an element of the candidate oral disease list.
In an embodiment, the step of obtaining the suspected dental lesion area and the oral disease name corresponding to the suspected dental lesion area may include:
determining a plurality of suspected dental lesion areas marked by the user on the image, and determining the oral disease name corresponding to each suspected dental lesion area according to the oral disease selected by the user from a preset oral disease list; wherein the preset oral disease list is determined according to the oral disease classification sequence preset in the target oral health examination report template.
In this embodiment, the user may mark the suspected dental lesion area of the image by selecting a tooth position number corresponding to the suspected dental lesion area.
Fig. 6 is a block diagram of a screenshot managing apparatus according to an embodiment of the present application, including:
a model screenshot module 11, configured to intercept an image of a specified area in the visual interface in response to a screenshot instruction issued by a user, where the specified area includes at least a portion of a three-dimensional model;
an image storage module 12, configured to bind the image with pose parameters of the three-dimensional model, and store the image in an image list; the image list comprises a plurality of images, and the images represent two-dimensional images of the three-dimensional model under one view angle;
the model rotating module 13 is configured to respond to an operation instruction of a user for a specified image in the image list, and display the three-dimensional model on the visual interface after rotating the three-dimensional model to a view angle corresponding to the specified image; and the view angle corresponding to the designated image is determined by the pose parameters of the three-dimensional model bound by the image.
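The cooperation of the screenshot, storage, and rotation modules can be sketched minimally as follows; the `Screenshot`/`ScreenshotManager` names and the (yaw, pitch, roll) pose representation are assumptions for illustration, since the application does not fix a concrete pose parameterization.

```python
# Minimal sketch: bind each captured image to the model's pose parameters,
# so the view angle can be restored when the user operates on that image.
from dataclasses import dataclass, field
from typing import List, Tuple

Pose = Tuple[float, float, float]  # assumed (yaw, pitch, roll) in degrees

@dataclass
class Screenshot:
    image: bytes  # captured pixels of the designated area
    pose: Pose    # pose parameters of the 3D model at capture time

@dataclass
class ScreenshotManager:
    images: List[Screenshot] = field(default_factory=list)

    def capture(self, image: bytes, current_pose: Pose) -> None:
        # Bind the image to the model's current pose and store it in the image list.
        self.images.append(Screenshot(image, current_pose))

    def restore_view(self, index: int) -> Pose:
        # Return the pose the renderer should rotate the model to
        # in response to an operation on the specified image.
        return self.images[index].pose
```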
In an embodiment, the specified image may include a plurality of images displaying a region of interest; the capturing timing of an image displaying the region of interest is a moment, determined based on a semantic recognition result of a communication record for the three-dimensional model, at which the region of interest exists in the three-dimensional model. The apparatus may further include:
a semantic recognition module 14, configured to determine description information of the region of interest based on the semantic recognition result;
the report generating module 15 is configured to generate an inspection report based on the image displaying the region of interest and the corresponding description information.
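As a rough stand-in for the semantic recognition step, capture timings could be derived from a timestamped communication transcript; the keyword matching below is a deliberately simplified placeholder for a real speech/NLP model, and all names are assumptions.

```python
# Simplified sketch: find moments in the communication record at which
# a region of interest is being discussed, to use as capture timings.
from typing import List, Tuple

# Assumed keywords standing in for a semantic recognition model.
ROI_KEYWORDS = ("lesion", "caries", "crowding", "crossbite")

def roi_moments(transcript: List[Tuple[float, str]]) -> List[float]:
    # transcript: (timestamp_seconds, utterance) pairs from the communication record.
    # Returns timestamps at which a region of interest exists, i.e. the
    # capture timings for screenshots displaying the region of interest.
    return [t for t, text in transcript
            if any(k in text.lower() for k in ROI_KEYWORDS)]
```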
In an embodiment, the screenshot managing apparatus may further include:
a control presentation module, configured to present at least one of the following controls on the visual interface in response to an operation instruction of the user for the intercepted image:
a first control K1, configured to enable a user to trigger an operation of label management on the intercepted image;
a second control K2, configured to enable a user to trigger an operation of switching to a mode that allows the user to mark the intercepted image; wherein the second control K2 comprises at least one of the following sub-controls:
a first sub-control K21, configured to enable a user to trigger an operation of adjusting the color of the mark;
a second sub-control K22, configured to enable a user to trigger an operation of adjusting the width of the mark;
a third sub-control K23, configured to enable a user to trigger an operation of adjusting the timeliness of the mark.
In an embodiment, the screenshot managing apparatus may further include:
an image adjustment module 16, configured to display the intercepted image, after reduction, in a preset area of the visual interface;
wherein the operation instruction of the user for the intercepted image in the model rotation module 13 may be an operation instruction triggered by the user, within a predetermined time, for the reduced intercepted image.
In an embodiment, the specified image may further include an image displaying marking information, and the model rotation module 13 may be further configured to, upon determining that the specified image is an image displaying marking information, display the marking information on the visual interface.
Fig. 7 is a schematic diagram of a hardware structure of a computer device according to an embodiment of the present disclosure. The computer device may include a processor 701, a machine-readable storage medium 702 storing machine-executable instructions. The processor 701 and the machine-readable storage medium 702 may communicate via a system bus 703. Also, the processor 701 may perform the screenshot management methods described above by reading and executing machine-executable instructions in the machine-readable storage medium 702 corresponding to the screenshot management logic.
The machine-readable storage medium 702 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information, such as executable instructions or data. For example, the machine-readable storage medium 702 may include at least one of: volatile memory, non-volatile memory, or other types of storage media. The volatile memory may be RAM (Random Access Memory), and the non-volatile memory may be flash memory, a storage drive (e.g., a hard disk drive), a solid-state disk, or a storage disk (e.g., an optical disk or a DVD).
Based on the method of any of the above embodiments, the present disclosure further provides a computer readable storage medium, where a computer program is stored, where the computer program when executed by a processor is configured to perform the screenshot management method of any of the above embodiments.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing describes specific embodiments of the present application. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Reference to "a particular example," "some examples," or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this application, schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.
The foregoing description of the preferred embodiments of the present invention is not intended to limit the invention to the precise form disclosed, and any modifications, equivalents, improvements and alternatives falling within the spirit and principles of the present invention are intended to be included within the scope of the present invention.
Claims (10)
1. A screenshot management method, comprising:
responding to a screenshot instruction sent by a user, and intercepting an image of a designated area in a visual interface, wherein the designated area comprises at least one part of a three-dimensional model;
binding the image with pose parameters of the three-dimensional model, and storing the image into an image list; the image list comprises a plurality of images, and the images represent two-dimensional images of the three-dimensional model under one view angle;
responding to an operation instruction of a user for a specified image in the image list, and displaying the three-dimensional model on the visual interface after rotating the three-dimensional model to a view angle corresponding to the specified image; and the view angle corresponding to the designated image is determined by the pose parameters of the three-dimensional model bound by the image.
2. The method of claim 1, wherein the designated image comprises a number of images displaying a region of interest; and the capturing timing of an image displaying the region of interest is a moment, determined based on a semantic recognition result of a communication record for the three-dimensional model, at which the region of interest exists in the three-dimensional model.
3. The method as recited in claim 2, further comprising:
determining the description information of the region of interest based on the semantic recognition result;
and generating an inspection report based on the image displaying the region of interest and the description information of the corresponding region of interest.
4. The method as recited in claim 1, further comprising:
responsive to a user operating instruction for the intercepted image, at least one of the following controls is presented on the visual interface:
the first control is used for enabling a user to trigger the operation of label management on the intercepted image;
a second control for causing a user to trigger an operation to switch to a mode that allows the user to mark the intercepted image;
a third control for enabling a user to trigger an operation of switching the display state of the image list;
wherein the second control comprises at least one sub-control of:
a first sub-control for enabling a user to trigger an operation of adjusting the color of the mark;
a second sub-control for enabling a user to trigger an operation of adjusting the width of the mark;
and a third sub-control for enabling a user to trigger an operation for adjusting the timeliness of the mark.
5. The method of claim 1, wherein after capturing the image of the designated area of the visual interface, the method further comprises:
after the intercepted image is reduced, displaying the reduced image in a preset area of the visual interface;
wherein the operation instruction of the user for the intercepted image is an operation instruction triggered by the user, within a predetermined time, for the reduced intercepted image.
6. The method of claim 1, wherein the designated image further comprises an image displaying marking information;
the responding to the operation instruction of the user for the specified image in the image list further comprises the following steps:
determining the designated image as an image displaying marking information;
and after the three-dimensional model is rotated to the view angle corresponding to the designated image, displaying the three-dimensional model on the visual interface further comprises:
and displaying the marking information on the visual interface.
7. A screenshot managing apparatus, comprising:
the model screenshot module is used for responding to a screenshot instruction sent by a user and intercepting an image of a designated area in the visual interface, wherein the designated area comprises at least one part of a three-dimensional model;
the image storage module is used for binding the image with pose parameters of the three-dimensional model and storing the image into an image list; the image list comprises a plurality of images, and the images represent two-dimensional images of the three-dimensional model under one view angle;
the model rotating module is used for responding to an operation instruction of a user for the specified image in the image list, rotating the three-dimensional model to the view angle corresponding to the specified image and displaying the three-dimensional model on the visual interface; and the view angle corresponding to the designated image is determined by the pose parameters of the three-dimensional model bound by the image.
8. The apparatus of claim 7, wherein the designated image comprises a number of images displaying a region of interest; the capturing timing of an image displaying the region of interest is a moment, determined based on a semantic recognition result of a communication record for the three-dimensional model, at which the region of interest exists in the three-dimensional model; and the apparatus further comprises:
the semantic recognition module is used for determining the description information of the region of interest based on the semantic recognition result;
and the report generation module is used for generating an inspection report based on the image displaying the region of interest and the corresponding description information.
9. A computer device comprising a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor, the processor being caused by the machine-executable instructions to: performing the method of any one of claims 1-6.
10. A computer readable storage medium, characterized in that the medium has stored thereon a computer program which, when executed by a processor, implements the method of any of claims 1 to 6.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202311789536.1A (CN117785003A) | 2023-12-22 | 2023-12-22 | Screenshot management method, screenshot management device, computer equipment and medium
PCT/CN2024/104069 (WO2025011498A1) | 2023-07-07 | 2024-07-05 | Method and apparatus for presenting oral health examination information, and device and storage medium
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202311789536.1A (CN117785003A) | 2023-12-22 | 2023-12-22 | Screenshot management method, screenshot management device, computer equipment and medium
Publications (1)
Publication Number | Publication Date
---|---
CN117785003A | 2024-03-29
Family
ID=90384498
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202311789536.1A (Pending) | Screenshot management method, screenshot management device, computer equipment and medium | 2023-07-07 | 2023-12-22
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117785003A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
WO2025011498A1 | 2023-07-07 | 2025-01-16 | 先临三维科技股份有限公司 | Method and apparatus for presenting oral health examination information, and device and storage medium
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |