CN119440365A - Information processing method, device, electronic device, storage medium and program product - Google Patents
- Publication number
- CN119440365A
- Authority
- CN
- China
- Prior art keywords
- card control
- content
- user
- target
- result
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- User Interface Of Digital Computer (AREA)
Abstract
The present disclosure relates to the field of information processing technologies, and in particular provides an information processing method, an information processing apparatus, an electronic device, a storage medium, and a program product. The method includes: in response to a screenshot circle-selection operation performed by a user on a current screen, generating a content card control corresponding to a target area circled by the user; identifying the content card control to obtain identification content; generating a result card control according to target content associated with the identification content; and generating a target combined image according to the content card control and the result card control. The complicated operations of user information processing are thus simplified, and information processing efficiency is improved.
Description
Technical Field
The present disclosure relates to the field of information processing technologies, and in particular, to an information processing method, an apparatus, an electronic device, a storage medium, and a program product.
Background
In information query and organization scenarios, after a user performs a screen-capture operation, the user obtains only the screenshot. If the screenshot needs to be further queried and organized (for example, image content identification or image summarization), a series of complicated user operations is required, and information processing efficiency is low.
Disclosure of Invention
An object of an embodiment of the present disclosure is to provide a method, an apparatus, an electronic device, a storage medium, and a program product for information processing, which are used to simplify complicated operations of information processing and improve information processing efficiency.
In one aspect, an embodiment of the present disclosure provides a method for processing information, where the method includes:
in response to a screenshot circle-selection operation of a user on a current screen, generating a content card control corresponding to a target area circled by the user;
identifying the content card control to obtain identification content;
generating a result card control according to the target content associated with the identification content;
and generating a target combined image according to the content card control and the result card control.
In one embodiment, the screenshot circle-selection operation includes a screenshot triggering operation and a content circle-selection operation;
generating, in response to the screenshot circle-selection operation of the user on the current screen, the content card control corresponding to the target area circled by the user includes:
responding to a screenshot triggering operation of a user, and performing full screen capturing on a current screen to obtain a full screen image;
responding to the content circling operation of the user for the full-screen image, and obtaining a target area circled by the user;
generating a content card control associated with the target area;
displaying the content card control above the full-screen image.
In one embodiment, obtaining, in response to the content circle-selection operation performed by the user on the full-screen image, the target area circled by the user includes:
detecting a circle selection track drawn in a full-screen image by a user through an electronic pen;
And determining a target area formed by the circled track.
In one embodiment, generating a target combined image from a content card control and a result card control includes:
capturing screenshots of the content card control and the result card control respectively, to obtain a content screenshot corresponding to the content card control and a result screenshot corresponding to the result card control;
and merging the content screenshot and the result screenshot to obtain a target merged image.
In one embodiment, before generating the target combined image from the content card control and the result card control, the method further comprises:
and in response to a control dragging operation performed by the user on the content card control or the result card control, dragging the content card control and the result card control simultaneously to a position specified by the user according to a dragging track of the user.
In one embodiment, before generating the target combined image from the content card control and the result card control, the method further comprises:
and responding to the editing operation aiming at the result card control, and editing the target content in the result card control to obtain a new result card control.
In one embodiment, the method further comprises:
Responding to the image dragging operation of a user, dragging the target combined image to a target application;
And executing image processing operation on the target combined image through the target application.
In one aspect, an embodiment of the present disclosure provides an apparatus for information processing, including:
The circle selection unit is used for responding to the screenshot circle selection operation of the user for the current screen and generating a content card control corresponding to the target area of the circle selection of the user;
The identification unit is used for identifying the content card control to obtain identification content;
the generating unit is used for generating a result card control according to the target content related to the identification content;
and the merging unit is used for generating a target merging image according to the content card control and the result card control.
In one embodiment, the screenshot circle-selection operation includes a screenshot triggering operation and a content circle-selection operation;
The circle selection unit is used for:
responding to a screenshot triggering operation of a user, and performing full screen capturing on a current screen to obtain a full screen image;
responding to the content circling operation of the user for the full-screen image, and obtaining a target area circled by the user;
generating a content card control associated with the target area;
displaying the content card control above the full-screen image.
In one embodiment, the circling unit is configured to:
detecting a circle selection track drawn in a full-screen image by a user through an electronic pen;
And determining a target area formed by the circled track.
In one embodiment, the merging unit is configured to:
capturing screenshots of the content card control and the result card control respectively, to obtain a content screenshot corresponding to the content card control and a result screenshot corresponding to the result card control;
and merging the content screenshot and the result screenshot to obtain a target merged image.
In one embodiment, the merging unit is further configured to:
and in response to a control dragging operation performed by the user on the content card control or the result card control, dragging the content card control and the result card control simultaneously to a position specified by the user according to a dragging track of the user.
In one embodiment, the merging unit is further configured to:
and responding to the editing operation aiming at the result card control, and editing the target content in the result card control to obtain a new result card control.
In one embodiment, the merging unit is further configured to:
Responding to the image dragging operation of a user, dragging the target combined image to a target application;
And executing image processing operation on the target combined image through the target application.
In one aspect, an embodiment of the present disclosure provides an electronic device, including:
a processor; and
a memory storing computer instructions for causing the processor to perform the steps of the method provided in the various alternative implementations of any of the information processing described above.
In one aspect, a computer-readable storage medium is provided in an embodiment of the present disclosure, storing computer instructions for causing a computer to perform the steps of a method as provided in various alternative implementations of any one of the information processes described above.
In one aspect, a computer program product is provided in an embodiment of the disclosure, comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in a processor of an electronic device, performs the steps of a method as provided in various alternative implementations of any of the information processing described above.
The information processing method comprises the steps of responding to screenshot circling operation of a user on a current screen, generating a content card control corresponding to a target area circled by the user, identifying the content card control to obtain identification content, generating a result card control according to target content associated with the identification content, and generating a target combined image according to the content card control and the result card control. Therefore, when the user performs screen capturing and circle selecting operations, content identification, content inquiry and image merging processing can be automatically performed, so that complicated operations of user information processing are simplified, and information processing efficiency is improved.
Drawings
Fig. 1 is a flow chart of a method of information processing in an embodiment of the present disclosure.
Fig. 2 is an exemplary diagram of a circled trajectory in an embodiment of the present disclosure.
Fig. 3 is an exemplary diagram of a content card control in an embodiment of the present disclosure.
FIG. 4 is an exemplary diagram of a result card control in an embodiment of the disclosure.
Fig. 5 is an exemplary diagram of a target merged image in an embodiment of the present disclosure.
Fig. 6 is a detailed flowchart of a method of information processing in an embodiment of the present disclosure.
Fig. 7 is a block diagram of an apparatus for information processing in an embodiment of the present disclosure.
Fig. 8 is a schematic structural diagram of an electronic device in an embodiment of the disclosure.
Detailed Description
The following description of the embodiments of the present disclosure is made clearly and fully with reference to the accompanying drawings, in which it is evident that the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments obtained by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure, are intended to be within the scope of this disclosure. In addition, technical features of the different embodiments of the present disclosure described below may be combined with each other as long as they do not conflict with each other.
In information query and organization scenarios, after a user performs a screen-capture operation, the user obtains only the screenshot. If the screenshot needs to be further queried and organized (for example, image content identification or image summarization), a series of complicated user operations is required, and information processing efficiency is low.
Based on the defects of the related art, the embodiment of the disclosure provides an information processing method, an apparatus, an electronic device, a storage medium and a program product, which aim to simplify complicated operations of information processing and improve information processing efficiency.
The embodiment of the present disclosure provides an information processing method, which may be applied to an electronic device. The type of the electronic device is not limited in the present disclosure and may be any device type suitable for implementation, such as a terminal device or a server, which is not described in detail in the present disclosure.
Referring to fig. 1, a flowchart of a method for processing information in an embodiment of the disclosure is shown, and the method is described below with reference to fig. 1, where a specific implementation flow of the method is as follows:
and step 101, responding to screenshot circling operation of a user for a current screen, and generating a content card control corresponding to a target area circled by the user.
The screenshot circle-selection operation may include a screenshot triggering operation and a content circle-selection operation.
In one embodiment, when step 101 is performed, the following steps may be employed:
and S1011, responding to the screenshot triggering operation of the user, and carrying out full screen capturing on the current screen to obtain a full screen image.
Optionally, the screenshot triggering operation may be: the user clicking a screenshot shortcut key, receiving a voice command for screenshot, receiving a specified touch-screen operation for screenshot (e.g., a long press on the screen), detecting a gesture operation for screenshot, or detecting a specified hardware state of the device (e.g., detecting that the device is being shaken by the user). For example, the screenshot triggering operation is the user pressing the screen of the electronic device with an electronic pen for a long time.
S1012, responding to the content circling operation of the user for the full-screen image, and obtaining a target area circled by the user.
In one embodiment, a circled track drawn by a user in a full-screen image through an electronic pen is detected, and a target area formed by the circled track is determined.
The circled track may be a closed loop connected end to end, which may be irregular. The target area includes a circled area of the circled track, and the target area may be the circled area or a rectangular frame including the circled area. Optionally, the target area may contain at least one of text, images, and controls.
For example, referring to FIG. 2, an exemplary plot of a circled trajectory is shown. The user performs circle selection on the area to be processed on the full-screen image through the electronic pen, and the circle selection track in fig. 2 is obtained.
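As an illustrative sketch only (not part of the disclosed embodiments), determining a rectangular target area from a circle-selection track can be as simple as checking that the track closes on itself and taking the axis-aligned bounding box of its points. The function name and the `closed_tolerance` parameter below are hypothetical:

```python
def bounding_rect(track, closed_tolerance=20):
    """Return the bounding rectangle (x, y, w, h) of a circle-selection
    track, treating the track as closed when its first and last points
    lie within `closed_tolerance` pixels of each other."""
    x0, y0 = track[0]
    xn, yn = track[-1]
    # Reject an open stroke: the circled area is only meaningful
    # when the track forms a (possibly irregular) closed loop.
    if (xn - x0) ** 2 + (yn - y0) ** 2 > closed_tolerance ** 2:
        raise ValueError("track is not a closed loop")
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    left, top = min(xs), min(ys)
    return (left, top, max(xs) - left, max(ys) - top)
```

A real implementation might instead keep the irregular circled region itself; the bounding box corresponds to the "rectangular frame including the circled area" option mentioned above.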
As another example, the method may also be applied to Virtual Reality (VR)/Augmented Reality (AR) scenarios, where a user may circle information in a virtual environment with a stylus.
And S1013, generating a content card control associated with the target area.
In one embodiment, a control is created and associated with the target area to obtain a draggable content card control. The content card control displays the content of the target area.
Further, the content card control can be dragged according to the user operation.
In one embodiment, the content card control is dragged to a user-specified location (i.e., the location where the drag trajectory is currently hovering) in response to a drag operation by the user.
S1014, displaying the content card control above the full screen image.
Referring to fig. 3, an exemplary diagram of a content card control is shown. In fig. 3, a content card control is displayed over a full screen image.
And 102, identifying the content card control to obtain identification content.
Alternatively, recognition may be performed through Optical Character Recognition (OCR). The recognition process may include image preprocessing, feature extraction, classification and recognition, and the like, so as to ensure the accuracy of the identification content.
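The image preprocessing step mentioned above can, in the simplest case, be a global binarization that separates foreground text from background before OCR. The sketch below is purely illustrative (a production pipeline would normally rely on an imaging library or the OCR engine's own preprocessing); it operates on a grayscale image represented as a 2D list of 0–255 values:

```python
def binarize(gray, threshold=128):
    """Global-threshold binarization: pixels at or above `threshold`
    become white (1), all others black (0)."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray]
```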
And 103, generating a result card control according to the target content associated with the identification content.
In one embodiment, a query is performed based on the identification content to obtain a response result, and the response result is used as the associated target content. For example, if the identification content is a mathematical problem, the target content may be an answer to the mathematical problem.
The result card control displays the target content. Optionally, the target content may include at least one of text, an image, and a control. For example, if the identification content is a holiday soundtrack, the target content may include an image or an image control representing the holiday soundtrack.
Referring to FIG. 4, an exemplary diagram of a result card control is shown. In fig. 4, a content card control and a result card control containing target content are displayed.
Further, the content card control and/or the result card control can also be dragged.
In one embodiment, in response to a control dragging operation of a user on a content card control or a result card control, the content card control and the result card control are simultaneously dragged to a position designated by the user according to a dragging track of the user.
Optionally, the type of the content card control and the type of the result card control may be the same or different.
For example, the user long-presses any position within the content card control or the result card control with the electronic pen, and then drags it to any position on the electronic screen with the electronic pen. When the electronic device detects the movement track of the electronic pen, the content card control and the result card control are moved simultaneously according to the movement track.
Thus, by dragging either content card, two content cards of different types can be moved simultaneously, and precise control of the content cards can be realized.
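The simultaneous-drag behavior amounts to applying one drag offset to both card controls so their relative layout is preserved. A minimal sketch, with hypothetical rectangle tuples standing in for real UI controls:

```python
def drag_cards(cards, dx, dy):
    """Move every card rect (x, y, w, h) by the same offset, so that
    linked cards keep their relative positions during a drag."""
    return [(x + dx, y + dy, w, h) for (x, y, w, h) in cards]
```

In an actual UI framework, the same idea would be expressed by updating both controls' positions inside a single move-event handler.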
Further, the dragging positions of the content cards can be cached in real time, and the currently dragged content can be stored.
Further, the content in the result card control may also be edited. In one embodiment, in response to an editing operation for the result card control, the target content in the result card control is edited to obtain a new result card control.
Wherein the editing operation may include at least one of altering, deleting, and copying. The editing operation may be determined based on detecting a touch screen operation of the electronic pen.
In this way, a variety of operations for the resultant card control may be implemented.
And 104, generating a target combined image according to the content card control and the result card control.
In one embodiment, the content card control and the result card control are respectively subjected to screenshot to obtain a content screenshot corresponding to the content card control and a result screenshot corresponding to the result card control, and the content screenshot and the result screenshot are combined to obtain a target combined image.
Alternatively, the target combined image may be generated upon generation of the result card control or after a specified time interval following its generation, or may be generated in response to an image merging operation by the user.
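Conceptually, merging the two screenshots is a vertical stacking of two pixel grids onto a shared canvas width. The pure-Python sketch below is illustrative only (a real implementation would use a platform imaging API rather than nested lists); the `pad` value fills the right edge of the narrower image:

```python
def merge_vertically(top, bottom, pad=0):
    """Stack two pixel grids vertically, right-padding the narrower
    one with `pad` so every row of the result has the same width."""
    width = max(len(top[0]), len(bottom[0]))

    def padded(img):
        return [row + [pad] * (width - len(row)) for row in img]

    return padded(top) + padded(bottom)
```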
Further, the target combined image may also be transferred to an application for image processing.
In one embodiment, a target combined image is dragged to a target application in response to an image dragging operation of a user, and an image processing operation is performed on the target combined image by the target application.
For example, the user may drag the target combined image to any target application: if dragged to an album application, the target combined image is stored in the album application; if dragged to the electronic device desktop, the target combined image is displayed on the desktop; if dragged to an input box of an application such as a communication application, the target combined image serves as input information of the input box; and if dragged to an application such as a browser, the target combined image is uploaded.
Further, the transfer and processing of the target combined image can be performed through a preset shortcut operation with a specific function.
For example, the shortcut operation may be a sliding operation on the electronic screen, and different sliding directions may correspond to different image processing manners. For example, sliding left may drag the target combined image to the album application for storage, and sliding right may drag it to the electronic device desktop for display.
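Classifying a sliding operation into a direction can be done from the gesture's start and end points alone. This sketch is illustrative (the threshold and screen-coordinate convention, with y growing downward, are assumptions):

```python
def swipe_direction(start, end, min_dist=30):
    """Classify a swipe by its dominant axis; returns 'left', 'right',
    'up', 'down', or None when the gesture is shorter than min_dist."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    if abs(dx) < min_dist and abs(dy) < min_dist:
        return None  # too short to count as a swipe
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```

The returned direction would then be looked up in a table mapping directions to processing manners (e.g., left to album storage, right to desktop display).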
For example, referring to fig. 5, an exemplary diagram of a target merged image is shown. In fig. 5, the target combined image is dragged into the album application by an image dragging operation.
It should be noted that fig. 5 is mainly used to illustrate that the target combined image is dragged into the album application; the characters, lines, and the like in fig. 5 are not clear, but this does not affect the clarity of the specification. Similarly, the unclear characters, lines, and the like in fig. 2 to 4 do not affect the clarity of the description.
In practical application, the shortcut operation can be set according to the practical application scene, and is not limited herein.
The above-described embodiment is further exemplified below with reference to fig. 6. Referring to fig. 6, a detailed flowchart of a method for processing information is shown, and the flow of the method includes:
And 601, responding to a screenshot triggering operation of a user, and carrying out full screen capturing on a current screen to obtain a full screen image.
Step 602, responding to content circling operation of a user on a full-screen image, and determining a circled target area.
In one embodiment, if the circled target area cannot be obtained, an error notification may be displayed and step 602 may be performed again.
And 603, generating a content card control corresponding to the target area.
Step 604, identifying the content card control, obtaining the identification content, and determining the first area position of the content card control.
The first area location may be coordinates of four corner points (i.e., rectangular corner points) of the content card control.
Step 605, generating a result card control according to the target content associated with the identification content.
Further, if the target content is empty, a prompt message for re-circling is displayed, and step 602 is performed.
Step 606, determining a second area position of the result card control according to the first area position.
In one embodiment, the result card control and the content card control do not overlap. The result card control is generally located in a rectangular area below the first area position, where the size of the rectangular area is the size of the result card control. If the area below the first area position is smaller than this rectangular area, the second area position may be selected from a designated area (for example, the area above the first area position).
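The below-with-fallback placement described above can be sketched as follows. This is illustrative only; the `gap` spacing and the clamp to the top of the screen are assumptions not stated in the text:

```python
def place_result_card(content_rect, card_size, screen_h, gap=8):
    """Place the result card below the content card when it fits on
    screen; otherwise fall back to the area above the content card.
    content_rect is (x, y, w, h); card_size is (w, h); returns (x, y)."""
    x, y, w, h = content_rect
    cw, ch = card_size
    below_top = y + h + gap
    if below_top + ch <= screen_h:
        return (x, below_top)          # normal case: below the first area
    return (x, max(0, y - gap - ch))   # fallback: above the first area
```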
And 607, displaying the result card control at the second area position.
And 608, in response to the control dragging operation of the user, dragging the content card control and the result card control to the position designated by the user.
Step 609, generating and storing a target combined image according to the content card control and the result card control.
In one embodiment, the content card control and the result card control are each captured through a region capture frame to obtain a content screenshot and a result screenshot, and the content screenshot and the result screenshot are combined to obtain the target combined image.
Step 610, in response to the image dragging operation of the user, dragging the target combined image to the target application.
Further, the stored position of the target combined image may also be updated in real time.
Various scenarios to which embodiments of the present disclosure apply are illustrated below.
One application scenario may be a learning scenario. In the education field, the method can be applied to intelligent learning equipment such as learning machines and electronic education tablets, for example, students can select content to be learned on the equipment through handwriting pens, then the electronic equipment can recognize characters in a selected area through OCR technology and generate and display result cards (namely result card controls) associated with the characters, and the students can drag the result cards to any positions, so that the students can view and learn conveniently. Thus, not only the learning efficiency is improved, but also the interest of learning is enhanced.
An application scene can be an intelligent office scene, and can be applied to electronic equipment such as an electronic notebook, an intelligent whiteboard and the like in the intelligent office field. The staff can select important conference contents or work tasks through handwriting pen, then the electronic equipment performs content recognition through OCR technology and generates result cards so as to facilitate subsequent consulting and sorting. And the result card can be dragged to any position, so that the working efficiency is improved.
An application scenario may be a teleconference scenario, in which case it may be applied to a video conferencing device. The participants can select important information on the screen through handwriting pens, and then the electronic equipment can recognize through OCR technology and generate a result card so as to facilitate subsequent consulting and sharing. By the method, the conference efficiency is improved, and remote collaboration is facilitated.
An application scene may be a virtual reality and augmented reality scene, which may be applied to VR/AR devices in the fields of virtual reality and augmented reality. The user can select information in the virtual environment through the handwriting pen, then the VR/AR equipment can recognize and generate a result card through the OCR technology, and the dragged content area and the result card can be dragged to any position, so that richer interaction modes are provided, and user experience is improved.
In the related art, only long-press dragging of a single control is supported; for example, only one control such as a picture or a piece of text can be dragged, and simultaneous dragging of multiple types of controls cannot be realized. This generally fails to meet users' diversified information processing requirements in practical applications such as education and intelligent office. In the embodiment of the disclosure, any card control can be long-pressed, and the two card controls can be dragged to any position simultaneously, which provides convenience for user information processing.
Moreover, in the related art, the screen capturing is usually performed in a full screen manner, but this often cannot be accurately matched with the content of the area selected by the user, and the situation that the recognition result is inconsistent with the content selected by the user may occur. For example, in the education field, when students select contents to be learned on electronic devices, if OCR recognition results do not match the selected contents, learning effects of the students are affected. In the embodiment of the disclosure, the target area to be identified can be determined through the circle selection of the user, so that the target area can be accurately identified.
Further, in the related art, only the screenshot can be displayed after a screenshot instruction of the user, and only the identification content can be displayed after an identification instruction of the user, without generating a result card control of associated target content; this cannot meet user requirements for information query and organization in fields such as intelligent office and teleconference. In the embodiment of the disclosure, after the user performs the circle-selection operation, the content card control and its corresponding result card control can be automatically generated through selection and identification, combined into the target combined image in real time, and dragged and transferred to any place. The complicated operations of the user are thus simplified, the user requirements for information query and organization are met, and the efficiency and convenience of information processing are improved.
Moreover, in the embodiment of the disclosure, interaction can be performed through clicking, dragging, control display, and other manners, so the method can be applied to the fields of virtual reality and augmented reality, which broadens the application range and enriches the interaction manners. The content card control and the result card control can be dragged to any position, so the points the user focuses on are not covered during interaction and the view is not blocked; for example, in an education scenario the user's current interaction is not disturbed. This provides a more flexible operation manner and enhances the user's interaction experience. In addition, the dragged position can be cached in real time, and the currently dragged content can be stored to synthesize the output result.
Based on the same inventive concept, the embodiments of the present disclosure further provide an information processing apparatus. Because the principle by which the apparatus solves the problem is similar to that of the information processing method, the implementation of the apparatus may refer to the implementation of the method, and repeated description is omitted. The apparatus may be applied to an electronic device; the type of the electronic device is not limited in this disclosure and may be any device type suitable for implementation, such as a terminal device or a server, which is not described in detail in this disclosure. The apparatus embodiments may be implemented by software, or by hardware or a combination of hardware and software. Taking software implementation as an example, the apparatus in a logical sense is formed by a processor of the electronic device where the apparatus is located reading corresponding computer program instructions from a nonvolatile memory into memory for execution.
Referring to fig. 7, a block diagram of an apparatus for processing information in an embodiment of the disclosure is shown. In some embodiments, an apparatus for information processing of examples of the present disclosure includes:
The circle selection unit 701 is configured to generate a content card control corresponding to a target area of the circle selection by the user in response to a screenshot circle selection operation of the user for the current screen;
the identifying unit 702 is configured to identify the content card control, and obtain identified content;
A generating unit 703, configured to generate a result card control according to the target content associated with the identified content;
And the merging unit 704 is used for generating a target merged image according to the content card control and the result card control.
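The four units above form a linear pipeline: circle selection, identification, result generation, and merging. The following sketch is illustrative only and not the patented implementation; all function names and the simplified string payloads are assumptions made for clarity.

```python
from dataclasses import dataclass

@dataclass
class CardControl:
    """A movable card control shown above the full-screen image."""
    kind: str        # "content" or "result"
    payload: str     # image data or generated text (simplified to str here)

def circle_select(screen: str, region: str) -> CardControl:
    # Circle selection unit 701: wrap the circled region in a content card.
    return CardControl(kind="content", payload=region)

def identify(card: CardControl) -> str:
    # Identification unit 702: e.g. OCR / image recognition (stubbed here).
    return f"recognized({card.payload})"

def generate_result(identified: str) -> CardControl:
    # Generating unit 703: look up target content associated with the
    # identified content and wrap it in a result card.
    return CardControl(kind="result", payload=f"summary of {identified}")

def merge(content: CardControl, result: CardControl) -> str:
    # Merging unit 704: compose both cards into one target merged image.
    return f"[{content.payload} | {result.payload}]"

content = circle_select("current screen", "circled area")
result = generate_result(identify(content))
merged = merge(content, result)
```

The point of the sketch is the data flow: each unit consumes the previous unit's output, so the whole chain can run automatically after the single circle-selection gesture.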
In one embodiment, the screenshot circle selection operation includes a screenshot triggering operation and a content circling operation;
The circle selection unit 701 is configured to:
responding to a screenshot triggering operation of a user, and performing full screen capturing on a current screen to obtain a full screen image;
responding to the content circling operation of the user for the full-screen image, and obtaining a target area circled by the user;
generating a content card control associated with the target area;
displaying the content card control above the full-screen image.
In one embodiment, the circle selection unit 701 is configured to:
detecting a circle selection track drawn in a full-screen image by a user through an electronic pen;
And determining a target area formed by the circled track.
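One plausible way to turn the detected circling track into a target area is to take the axis-aligned bounding rectangle of the stylus sample points. This geometry is an assumption for illustration; the disclosure does not fix how the area enclosed by the track is computed.

```python
def track_to_region(track):
    """Return (left, top, right, bottom) bounding the circled track.

    `track` is a sequence of (x, y) stylus sample points recorded while
    the user draws the circle selection with an electronic pen.
    """
    if not track:
        raise ValueError("empty circle-selection track")
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    return (min(xs), min(ys), max(xs), max(ys))

# A rough circular stroke around the point (100, 100):
region = track_to_region([(80, 100), (100, 80), (120, 100), (100, 120)])
```

The resulting rectangle can then be used to crop the full-screen image for the content card control.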
In one embodiment, the merging unit 704 is configured to:
Capturing screenshots of the content card control and the result card control respectively, to obtain a content screenshot corresponding to the content card control and a result screenshot corresponding to the result card control;
and merging the content screenshot and the result screenshot to obtain a target merged image.
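A minimal sketch of the merge step, under the assumption that both screenshots are same-width pixel grids stacked vertically; the actual layout of the target merged image is not specified by the claims, so the stacking choice here is illustrative only.

```python
def merge_screenshots(content_img, result_img):
    """Stack two same-width images (lists of pixel rows) into one merged image."""
    if content_img and result_img and len(content_img[0]) != len(result_img[0]):
        raise ValueError("screenshots must share a width to be stacked")
    return content_img + result_img

content_shot = [[1, 1], [1, 1]]   # 2x2 content screenshot
result_shot = [[2, 2]]            # 2x1 result screenshot
merged_img = merge_screenshots(content_shot, result_shot)  # 3 rows total
```

In a real implementation the rows would be RGBA pixel data and a padding or scaling step would reconcile mismatched widths instead of raising.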
In one embodiment, the merging unit 704 is further configured to:
And responding to the control dragging operation of the user on the content card control or the result card control, and simultaneously dragging the content card control and the result card control to the position appointed by the user according to the dragging track of the user.
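The paired-drag behavior above (dragging either card moves both) can be sketched by applying the same trajectory delta to every card position. Reducing a drag track to a single (dx, dy) offset is an assumption for brevity.

```python
def drag_cards(positions, dx, dy):
    """Move every card position by the drag delta so the pair stays linked.

    `positions` maps card names to (x, y) screen coordinates; dragging the
    content card or the result card moves both simultaneously.
    """
    return {name: (x + dx, y + dy) for name, (x, y) in positions.items()}

cards = {"content": (10, 20), "result": (10, 120)}
cards = drag_cards(cards, 30, -5)   # user drags either card by (30, -5)
```

Because both cards receive the identical offset, their relative layout is preserved at the user-specified position, which is what the subsequent screenshot-and-merge step relies on.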
In one embodiment, the merging unit 704 is further configured to:
and responding to the editing operation aiming at the result card control, and editing the target content in the result card control to obtain a new result card control.
In one embodiment, the merging unit 704 is further configured to:
Responding to the image dragging operation of a user, dragging the target combined image to a target application;
And executing image processing operation on the target combined image through the target application.
The information processing method includes: in response to a screenshot circle selection operation of a user on the current screen, generating a content card control corresponding to the target area circled by the user; identifying the content card control to obtain identified content; generating a result card control according to the target content associated with the identified content; and generating a target merged image according to the content card control and the result card control. Therefore, when the user performs the screenshot circle selection operation, content identification, content query and image merging can be performed automatically, simplifying the complicated operations of information processing and improving information processing efficiency.
In an embodiment of the present disclosure, there is also provided an electronic device including:
a processor; and
a memory storing computer instructions, the computer instructions being configured to cause the processor to perform the method of any of the embodiments described above.
In an embodiment of the disclosure, a computer readable storage medium is provided, where computer instructions are stored, where the computer instructions are configured to cause a computer to perform a method of any of the foregoing embodiments.
Embodiments of the present disclosure also provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when executed in a processor of an electronic device, causes the processor of the electronic device to implement the method of any of the above embodiments.
Fig. 8 shows a schematic structural diagram of an electronic device 8000. Referring to fig. 8, the electronic device 8000 includes a processor 8010 and a memory 8020, and may optionally include a power supply 8030, a display unit 8040, and an input unit 8050.
The processor 8010 is the control center of the electronic device 8000; it connects the various components using various interfaces and lines, and performs the various functions of the electronic device 8000 by running or executing software programs and/or data stored in the memory 8020, thereby monitoring the electronic device 8000 as a whole.
In the disclosed embodiment, the processor 8010 executes the steps in the above embodiments when it invokes a computer program stored in the memory 8020.
Optionally, the processor 8010 may include one or more processing units. Preferably, the processor 8010 may integrate an application processor and a modem processor, wherein the application processor primarily handles the operating system, user interfaces, applications, etc., and the modem processor primarily handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 8010. In some embodiments, the processor and the memory may be implemented on a single chip; in other embodiments, they may be implemented on separate chips.
The memory 8020 may mainly include a program storage area that may store an operating system, various applications, and the like, and a data storage area that may store data created according to the use of the electronic device 8000, and the like. In addition, the memory 8020 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The electronic device 8000 also includes a power supply 8030 (e.g., a battery) that provides power to the various components, which may be logically coupled to the processor 8010 via a power management system, such that the power management system may be used to manage charging, discharging, and power consumption.
The display unit 8040 may be used to display information input by a user or information provided to the user, various menus of the electronic device 8000, and the like; in the embodiments of the present disclosure, it is mainly used to display the display interface of each application in the electronic device 8000 and objects such as text and pictures shown in the display interface. The display unit 8040 may include a display panel 8041. The display panel 8041 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
The input unit 8050 may be used to receive information such as numbers or characters input by a user. The input unit 8050 may include a touch panel 8051 and other input devices 8052. Among other things, the touch panel 8051, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on the touch panel 8051 or thereabout using any suitable object or accessory such as a finger, stylus, etc.).
Specifically, the touch panel 8051 may detect a touch operation by a user, detect signals resulting from the touch operation, convert the signals into coordinates of contacts, send the coordinates of contacts to the processor 8010, and receive and execute a command sent from the processor 8010. In addition, the touch panel 8051 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Other input devices 8052 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, on-off keys, etc.), a trackball, mouse, joystick, etc.
Of course, the touch panel 8051 may cover the display panel 8041. When the touch panel 8051 detects a touch operation on or near it, the operation is transmitted to the processor 8010 to determine the type of touch event, and the processor 8010 then provides a corresponding visual output on the display panel 8041 according to the type of touch event. Although in fig. 8 the touch panel 8051 and the display panel 8041 are two separate components implementing the input and output functions of the electronic device 8000, in some embodiments the touch panel 8051 may be integrated with the display panel 8041 to implement the input and output functions of the electronic device 8000.
The electronic device 8000 may also include one or more sensors, such as a pressure sensor, a gravitational acceleration sensor, a proximity light sensor, and the like. Of course, the electronic device 8000 may also include other components such as cameras, as desired in a particular application, which are not shown in fig. 8 and will not be described in detail since these components are not the components that are important in embodiments of the present disclosure.
It will be appreciated by those skilled in the art that fig. 8 is merely an example of an electronic device and is not limiting; the device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
For convenience of description, the above parts are divided into modules (or units) by function and described respectively. Of course, when implementing the present disclosure, the functions of each module (or unit) may be implemented in one or more pieces of software or hardware.
Claims (11)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411435054.0A CN119440365A (en) | 2024-10-14 | 2024-10-14 | Information processing method, device, electronic device, storage medium and program product |
Publications (1)
Publication Number | Publication Date |
---|---|
CN119440365A true CN119440365A (en) | 2025-02-14 |
Family
ID=94517121
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||