Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
First embodiment
It should be noted that, according to the embodiments of the present application, the management efficiency of recipe parameters and control values in semiconductor manufacturing is improved through data standardization and an enhanced sharing mechanism. The system supports offline and online sharing of recipe parameters through a unified data structure and format; other systems (such as FDC and APC) can query the recipe information in a standard format, so that parameter consistency among different systems is ensured, the repeated labor of engineers in recipe management is significantly reduced, and production efficiency and the stability of process quality are improved.
Fig. 1 is a flow chart of a method for managing data of semiconductor process recipe parameters according to an embodiment of the present application.
Specifically, as shown in fig. 1, a first embodiment of the present application relates to a data management method for semiconductor process recipe parameters, which includes the following steps:
Step S101, data analysis is performed on the acquired process recipe with one or more data formats to convert the process recipe into the target data format, and the target data format is stored in a database (as shown in fig. 2, 3 or 4).
It is understood that in the field of semiconductor manufacturing, process recipe parameters refer to specific parameter settings used to control and optimize various process steps during semiconductor manufacturing. These parameters may include, but are not limited to, physical quantities such as temperature, pressure, gas flow, voltage, current, and process control parameters such as time, sequence of steps, and the like. Process recipe parameters are critical to ensure quality, yield, and consistency of semiconductor products. The data management method of the application relates to systematic management and optimization of these process recipe parameters.
Specifically, the method comprises steps of data acquisition, data parsing, data conversion, and data storage. Here, data acquisition refers to obtaining raw process recipe parameter data from a semiconductor manufacturing facility or related system. Such data may exist in a variety of formats, such as, but not limited to, text files, spreadsheets, database records, or device output in a proprietary format. Data parsing refers to analyzing and processing the collected raw data to extract meaningful information. This may involve operations such as identifying data structures, separating parameter names and values, and interpreting units and value ranges. The purpose of data parsing is to convert unstructured or semi-structured raw data into structured information that can be further processed. Data conversion refers to converting the parsed data into a predefined target data format. The purpose of this step is to achieve standardization of the data so that process recipe parameters from different sources, in different formats, can be uniformly managed and used. The target data format may be a generic data exchange format, such as JSON or XML, or a custom format specifically designed for semiconductor process parameter management. Data storage refers to storing the converted standard format data in a database. This database may be a relational database, such as MySQL or Oracle, or a non-relational database, such as MongoDB or Cassandra, with the specific choice depending on the structure of the data, the query requirements, and the system performance requirements.
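The acquire-parse-convert-store pipeline described above can be sketched as follows. This is a minimal illustration, not the application's actual implementation: the "name=value" input format, the table schema, and all names are assumptions, with JSON as the target format and SQLite as the database.

```python
import json
import sqlite3

def parse_recipe_text(raw: str) -> dict:
    """Parse a hypothetical 'name=value' text recipe into a dict of parameters."""
    params = {}
    for line in raw.strip().splitlines():
        name, _, value = line.partition("=")
        params[name.strip()] = value.strip()
    return params

def store_recipe(db: sqlite3.Connection, recipe_name: str, params: dict) -> None:
    """Store the parsed recipe as a JSON document (the target format here) in SQLite."""
    db.execute("CREATE TABLE IF NOT EXISTS recipes (name TEXT, body TEXT)")
    db.execute("INSERT INTO recipes VALUES (?, ?)", (recipe_name, json.dumps(params)))

# Acquire -> parse -> convert -> store
raw = "Temperature=350\nPressure=2.5\nTime=60"
db = sqlite3.connect(":memory:")
params = parse_recipe_text(raw)
store_recipe(db, "RCP-001", params)
row = db.execute("SELECT body FROM recipes WHERE name = 'RCP-001'").fetchone()
print(json.loads(row[0])["Temperature"])  # prints 350
```

Once the recipe is in the unified JSON form, any downstream consumer can query it without knowing the source format.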
Step S102, responding to the data request transmitted through the data interface, and returning the corresponding process recipe data in the database through the data interface in the format of the data request based on the configured mapping relation. The data interface here may be in various forms of API (application programming interface), web service, message queue, etc. The data requests may come from various clients including, but not limited to, manufacturing Execution Systems (MES), equipment automation systems, data analysis tools, and the like. In response, the method converts the standard format process recipe data stored in the database into a specific format required by the data requester based on a pre-configured mapping relationship. This dynamic format conversion mechanism enables the system to flexibly accommodate the data requirements of different systems without changing the core data storage structure.
Specifically, for the received hierarchical data structure (as shown in fig. 5), in an embodiment, fig. 2 is a flow chart of a data normalization method according to an embodiment of the present application. As shown in fig. 2, a data normalization method includes the steps of:
Step S201, receiving a process recipe including one or more pieces of process step information, wherein each process step includes a control code and a plurality of parameters. Control codes are understood herein to be unique identifiers that identify and distinguish different process steps, while parameters are the specific settings and conditions, such as temperature, pressure, time, gas flow, etc., corresponding to each process step.
Step S202, determining the meaning of each parameter under each control code according to a predefined equipment manual. Upon receipt of a process recipe containing process step information, the method determines the specific meaning of the individual parameters under each control code according to a predefined equipment manual. The equipment manual may be a document or a data table defined in advance by an equipment manufacturer or a process engineer, and information such as a parameter type, a value range, and a physical meaning corresponding to each control code is specified in detail. By consulting and analyzing this equipment manual, the role and meaning of each parameter in a particular process step can be ascertained.
It should be noted that in an actual semiconductor manufacturing process, different equipment models and different process nodes may use different control coding systems and parameter definition manners. Thus, predefined equipment manuals may require customization and maintenance in practical applications based on specific equipment models and process characteristics. At the same time, the content of the equipment manual may also change with process upgrades and optimizations.
In step S203, each parameter is converted into a standardized format for storage in a database. After the meaning of the individual parameters is clarified, the method further converts the parameters into a standardized format for subsequent storage and processing (as shown in fig. 4). The standardized format may be designed and defined according to the actual requirements and system environment, for example, all parameter values may be uniformly converted into specific data types (such as floating point numbers, integers, etc.), or parameter values of different units may be uniformly converted into consistent units. The standardization aims at eliminating differences and incompatibilities among different equipment and different processes and improving the consistency and comparability of data.
For a received data stream encoded data structure (as shown in fig. 6), fig. 3 is a flow chart of another data normalization method according to an embodiment of the present application. As shown in fig. 3, another data normalization method includes the steps of:
In step S301, a binary data stream containing a process recipe is received. It should be noted that the term "process recipe" as used herein refers to a set of parameters and instructions for guiding production operations in an industrial production process. Process recipes typically include, but are not limited to, raw material proportions, production process parameters, quality control indicators, and the like. This information is of great significance for ensuring product quality and improving production efficiency.
Further, the "binary data stream" referred to in the present application is a data sequence encoded in a binary format. A binary data stream is a common data transmission and storage format in computer systems, with the characteristics of high data density and high transmission efficiency. In implementations of the present application, the process recipe information is encoded as a binary data stream to facilitate the transmission and processing of data.
Step S302, parsing the binary data stream into a plurality of data segments according to a predefined parsing rule. The parsing rules herein refer to a set of rules preset to identify and segment different parts of the data stream. The parsing rules may be based on structural features of the data stream, such as fixed length fields, specific separators, or specific data patterns, etc. The parsing process may use a variety of algorithms and techniques, such as regular expression matching, state machine parsing, or custom parsing algorithms, etc. The purpose of parsing is to split the original binary data stream into meaningful data segments, each representing a particular parameter or information.
The binary data stream is parsed into a plurality of data segments according to a predefined parsing rule as follows: the binary data stream is preprocessed according to preset byte identifiers and the data segment types are defined; after the preset bytes are read to extract the file header information, an inter-segment relationship graph is built; the data stream is divided into a plurality of data blocks, with overlapping areas reserved between the blocks; and after segment validity is confirmed through feature library matching, the data segments are organized according to a unified format.
Preprocessing the binary data stream according to preset byte identifiers, defining the data segment types, reading the preset bytes to extract file header information, and then establishing the inter-segment relationship graph comprises: preprocessing the binary data stream by identifying specific byte patterns as paragraph identifiers, with a first preset byte as the start identifier and a second preset byte as the end identifier; defining, based on a state transition method, three data segment types, namely a parameter name segment beginning with a third preset byte, a parameter value segment beginning with a fourth preset byte, and a control segment beginning with a fifth preset byte; reading the preset bytes of the data stream for initial parsing to extract file header information, the file header comprising a data version number, a time stamp, and segment count information; and establishing the inter-segment relationship graph using an adjacency matrix representation, in which the association strength is calculated from a reference association degree, a distance decay factor, and the segment lengths.
Specifically, the parsing rule may employ a state-transition-based segmented parsing method. The method first preprocesses the binary data stream by identifying specific byte patterns as paragraph identifiers. Specifically, 0x55 is used as the start identifier and 0x65 as the end identifier. The parsing rules define three main data segment types: a parameter name segment, a parameter value segment, and a control segment. The parameter name segment starts with 0x73, the parameter value segment with 0x72, and the control segment with 0x6D.
The length of each data segment is determined by means of dynamic length coding. In a specific implementation, a variable-length coding algorithm is used, wherein the first 4 bytes are used to store segment length information. The length calculation formula is L = B0 + (B1 << 8) + (B2 << 16) + (B3 << 24), where L is the actual length of the data segment, B0 to B3 are the 4 byte values in the length field, and << represents a left-shift operation.
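The dynamic length coding above can be sketched as follows, interpreting the 4-byte length field as a sum of left-shifted byte values (the little-endian byte order is an assumption):

```python
def segment_length(length_field: bytes) -> int:
    """L = B0 + (B1 << 8) + (B2 << 16) + (B3 << 24): the 4-byte length field
    interpreted as a sum of left-shifted byte values (little-endian assumed)."""
    return sum(b << (8 * i) for i, b in enumerate(length_field[:4]))

assert segment_length(bytes([0x10, 0x00, 0x00, 0x00])) == 16
assert segment_length(bytes([0x01, 0x01, 0x00, 0x00])) == 257
```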
In order to improve parsing efficiency, the embodiment of the application introduces a sliding window mechanism, in which the window size W is dynamically adjusted according to the data characteristics. The adjustment of the window size follows the rule W = min(W0 × (1 + α × σ), Wmax), where W0 is the reference window size with a value of 128 bytes, α is the adjustment coefficient with a value of 0.15, σ is the standard deviation of the last 100 data segment lengths, and Wmax is the maximum window limit, set to 8192 bytes.
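A minimal sketch of this window adjustment rule, assuming the population standard deviation over the last 100 segment lengths:

```python
import statistics

def window_size(recent_lengths: list, w0: int = 128, alpha: float = 0.15,
                w_max: int = 8192) -> int:
    """W = min(W0 * (1 + alpha * sigma), Wmax), with sigma the standard
    deviation of the last 100 data segment lengths."""
    last = recent_lengths[-100:]
    sigma = statistics.pstdev(last) if len(last) > 1 else 0.0
    return min(int(w0 * (1 + alpha * sigma)), w_max)

# Uniform segment lengths leave the window at its 128-byte reference size;
# highly variable lengths grow it, capped at 8192 bytes.
```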
In the data segment boundary identification process, a modified Boyer-Moore algorithm is adopted for pattern matching. The algorithm accelerates the matching process by building a bad character table and a good suffix table. For special cases, such as when a possible overlap of data segments is encountered, the system may employ a forward validation approach. The specific operation is to examine the data characteristics of the 32 bytes before and after the suspicious boundary and calculate the entropy value E = -Σ p_i × log2(p_i), where p_i is the probability of occurrence of the byte value i within this range. The boundary is considered valid when the E value is between 2.5 and 6.7. This method can effectively avoid misjudgment and improve parsing accuracy.
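The entropy-based boundary validation can be illustrated as follows; `boundary_valid` and its handling of the 32-byte windows are hypothetical helpers built from the description above:

```python
import math
from collections import Counter

def entropy(window: bytes) -> float:
    """Shannon entropy E = -sum(p_i * log2(p_i)) over the byte values in the window."""
    n = len(window)
    counts = Counter(window)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def boundary_valid(before: bytes, after: bytes) -> bool:
    """Accept a suspicious boundary when the entropy of the surrounding
    32 + 32 bytes falls in the 2.5..6.7 range."""
    e = entropy(before[-32:] + after[:32])
    return 2.5 <= e <= 6.7
```

Constant padding (entropy near 0) and perfectly random data (entropy near 8) both fall outside the accepted range, which is what lets this check reject false boundaries.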
To ensure the stability of the parsing process, checksum verification is also required for each potential data segment. Checksum calculation employs a modified Fletcher algorithm with two running sums updated for each byte, C1 = (C1 + d_i) mod 255 and C2 = (C2 + C1) mod 255, where d_i is the i-th byte value in the data segment. The data segment is considered valid only if the calculated checksum matches the checksum value stored at the end of the data segment.
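Since the exact modification to the Fletcher algorithm is not specified here, the sketch below shows the standard Fletcher-16 checksum on which it is based:

```python
def fletcher16(data: bytes) -> int:
    """Standard Fletcher-16: two running sums taken modulo 255.
    C1 accumulates the byte values; C2 accumulates the running C1."""
    c1 = c2 = 0
    for d in data:
        c1 = (c1 + d) % 255
        c2 = (c2 + c1) % 255
    return (c2 << 8) | c1

assert fletcher16(b"abcde") == 0xC8F0  # a published Fletcher-16 test vector
```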
In an actual data segment parsing implementation, the system first loads a predefined segment type mapping table that defines specific structural features of different types of data segments. The mapping table is stored in a local configuration file in the form of key-value pairs, wherein the key is a segment type identifier (1 byte), and the value is detailed description information of the segment of the type, including the expected length range, the internal structure, and the like.
The parsing of the data segments comprises the following specific steps. First, the system reads the first 32 bytes of the data stream for initial parsing and extracts the file header information. The file header contains basic information such as the data version number, a time stamp, and the number of segments. The version number is used to determine the parsing strategy used; the currently supported version numbers range from 1.2.3 to 2.1.0. Next, the system builds an inter-segment relationship graph (Segment Relationship Graph, SRG). The graph is represented using an adjacency matrix, whose element A_ij represents the strength of association between the i-th and j-th segments. The association strength A_ij is calculated from a reference association degree A0 (with a value of 0.8), an exponential decay term based on the distance d_ij of the two segments in the data stream and a distance decay factor λ (with a value of 256), and the lengths L_i and L_j of the two segments.
Dividing the data stream into a plurality of data blocks with overlapping areas reserved between the blocks, and organizing the data segments according to a unified format after confirming segment validity through feature library matching, comprises: dividing the data stream into a plurality of data blocks of similar size, and, for each data block, searching for segment boundary identifiers and constructing a segment position index; reserving an overlapping area between adjacent data blocks and determining the attribution of data segments in the overlapping area by calculating local context similarity; maintaining a dynamically updated segment feature library, calculating a feature matching degree from feature weights and similarity scores, and confirming the validity of a segment when the feature matching degree exceeds a preset threshold; and organizing the parsed data segments according to a unified internal format comprising a segment type identifier, segment length information, a segment attribute flag, a data payload, and a checksum.
In particular, the system may employ an improved divide-and-conquer strategy when performing specific data segment extraction. The data stream is first divided into a plurality of blocks of similar size, the block size being automatically adjusted according to the system memory constraints and typically set to 4096 bytes. For each block, segment boundary identifiers are searched within the block to construct a preliminary segment position index. For each possible segment boundary position, statistics of the preceding and following data are calculated, including byte frequency distribution, repetition patterns, etc. A dynamic programming algorithm is used to optimize the segment boundary positions and minimize the segmentation error probability. To handle the situation where a data segment may span a block, the system reserves a 64-byte overlap area between blocks. For data segments in the overlapping region, their attribution is determined by calculating local context similarity. The similarity calculation considers the continuity and semantic consistency of the byte sequence.
During the parsing process, the system maintains a dynamically updated segment feature library for optimizing the subsequent parsing process. The feature library contains statistical information of the identified segments, such as length distribution, content features, etc. A feature matching degree is calculated as M = Σ w_k × s_k, where w_k is a feature weight (determined experimentally, e.g., a length feature weight of 0.4 and a content feature weight of 0.6) and s_k is the similarity score of the corresponding feature. When the M value exceeds the threshold of 0.85, the validity of the segment can be quickly confirmed.
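The feature matching degree can be sketched as a weighted sum of per-feature similarity scores; the dict-based interface and feature keys are assumptions for illustration:

```python
def feature_match_degree(scores: dict, weights: dict = None) -> float:
    """M = sum(w_k * s_k) over the features present in the weight table."""
    weights = weights or {"length": 0.4, "content": 0.6}
    return sum(w * scores.get(k, 0.0) for k, w in weights.items())

def segment_valid(scores: dict, threshold: float = 0.85) -> bool:
    """A segment is quickly confirmed when M exceeds the 0.85 threshold."""
    return feature_match_degree(scores) > threshold
```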
For the handling of abnormal situations, the system may run a multi-level fault tolerance mechanism: when a data segment checksum mismatch is detected, correction is attempted using a spare CRC32; when a segment length exceeds the expected range, it is re-estimated based on the characteristics of adjacent segments; and when an unknown segment type identifier occurs, the segment is marked as an object to be analyzed and its context information is recorded for subsequent analysis.
The parsed data segments are organized in a unified internal format, including segment type identification (1 byte), segment length information (4 bytes), segment attribute flags (1 byte), data payload (variable length), checksum (2 bytes). This organization ensures data integrity and traceability while providing clear processing boundaries for subsequent decoding processes.
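The unified internal format above can be illustrated with `struct`; the 16-bit checksum shown is a placeholder byte sum, standing in for the Fletcher-style checksum described earlier:

```python
import struct

def pack_segment(seg_type: int, attr: int, payload: bytes) -> bytes:
    """type (1 byte) | length (4 bytes) | attributes (1 byte) | payload | checksum (2 bytes).
    The checksum here is a simple byte sum, a placeholder for the real algorithm."""
    header = struct.pack("<BIB", seg_type, len(payload), attr)
    checksum = sum(payload) & 0xFFFF
    return header + payload + struct.pack("<H", checksum)

seg = pack_segment(0x73, 0x00, b"Temperature")  # 0x73: parameter name segment
assert len(seg) == 1 + 4 + 1 + len(b"Temperature") + 2
```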
Step S303, decoding each data segment to obtain a corresponding parameter name and parameter value, wherein the decoding process comprises converting the ASCII codes in the target data segment into character strings to obtain the parameter names, and converting the other data segments into unsigned integers to obtain the parameter values. This decoding mode fully utilizes the characteristics of ASCII coding, so that parameter names can be presented in a human-readable form, while converting parameter values into unsigned integers ensures the accuracy and consistency of the numerical values.
It should be noted that the decoding process may involve conversion of multiple data types, not limited to ASCII codes and unsigned integers. For example, certain parameters may need to be decoded as floating point numbers, signed integers, or other complex data types. Therefore, the decoding process of the present invention can be extended and customized according to the actual requirements to support more data types and formats.
Step S304, the decoded data stream is converted into a standardized format for storage in a database. The standardized format refers to a unified, canonical data representation that facilitates data exchange and processing between different systems. The choice of standardized format may be determined according to the specific application scenario and system requirements; for example, a common data exchange format such as JSON, XML, or CSV may be adopted, or a custom structured data format may be used (as shown in fig. 4). The process of converting data into a standardized format may involve operations such as reorganization of data structures, conversion of data types, and addition of metadata. The step of storing into the database persists the normalized data for subsequent querying, analysis, and use. The database may be a relational database (e.g., MySQL, Oracle, etc.), a non-relational database (e.g., MongoDB, Cassandra, etc.), or another type of data storage system. Selecting an appropriate database type and structure is critical for efficient management and fast retrieval of the data.
Fig. 4 is a flow chart of another data normalization method according to an embodiment of the present application. As shown in fig. 4, the standardized format includes storing parameters of each process step as separate data entries, each data entry including a parameter name, a parameter value, a lower limit value, and an upper limit value field, and configuring the parameter value as a step identifier and leaving the corresponding lower limit value and upper limit value fields in the database empty when the parameter name is a step.
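A sketch of what such standardized data entries might look like; the field names are illustrative assumptions:

```python
# Each process step's parameters stored as separate entries; for the "Step"
# parameter, the value field carries the step identifier and the limit
# fields stay empty (None).
entries = [
    {"name": "Step",        "value": "1",   "lsl": None,  "usl": None},
    {"name": "Temperature", "value": "350", "lsl": "340", "usl": "360"},
    {"name": "Pressure",    "value": "2.5", "lsl": "2.0", "usl": "3.0"},
]
```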
Fig. 5 is a schematic diagram of a hierarchical data structure according to an embodiment of the present application.
As shown in fig. 5, 1001 is a control code (CCode). The manual of the device describes what the parameters under each CCode mean. Under 1001 there are 4 parameters, namely StepName, 20 (Time), 30 (Temperature), and 40 (Pressure). The RMS will parse this content into the following format for storage in a database, as shown in table 1:
Table 1 Hierarchical data parsed and stored in the database
Fig. 6 is a schematic diagram of a data structure of a data stream code according to an embodiment of the present application. The binary data needs to be parsed, and the parsing rules are described in the manual of the apparatus. The following is the parsing of this simple Demo: the first portion is 16 bytes; excluding the leading 00 bytes, the second eight bytes are converted into a string according to ASCII codes to obtain the parameter name; the second portion is 4 bytes of unsigned Int data, which becomes 300 when converted into a number; and the third portion is also 4 bytes of unsigned Int data, which becomes 200 when converted into a number. The actual Body will be very long and the parsing rules are more complex. The RMS will parse the data into the following format to be placed in the database, as shown in table 2:
Table 2 Data format stored in the database after parsing the data-stream-encoded recipe
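The simple Demo layout described above can be sketched as follows; the exact offsets, big-endian byte order, and null padding are assumptions made for illustration:

```python
import struct

def parse_demo(stream: bytes):
    """Sketch of the Demo layout: a 16-byte first portion whose last 8 bytes
    are an ASCII parameter name (00 padding stripped), followed by two
    4-byte unsigned Int fields."""
    name = stream[8:16].strip(b"\x00").decode("ascii")
    v1, v2 = struct.unpack(">II", stream[16:24])
    return name, v1, v2

# Build a sample stream matching that layout and parse it back.
demo = b"\x00" * 8 + b"Temp".ljust(8, b"\x00") + struct.pack(">II", 300, 200)
assert parse_demo(demo) == ("Temp", 300, 200)
```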
Fig. 7 is a flowchart of a method for responding to a data request according to an embodiment of the present application. As shown in fig. 7, a response data request method includes the steps of:
Step S701, a mapping relation is established and stored, wherein the mapping relation is used for mapping the recipe name and the parameter name of the target system to the corresponding names in the database. Because recipe names and parameter names may differ between systems, the system integrates a conversion configuration function for this Mapping relationship. The function can automatically convert the recipe names and parameter names in the FDC system into the corresponding names and parameters in the RMS system, so that naming differences among different systems are eliminated and the consistency and accuracy of recipe data are ensured.
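A minimal sketch of such a mapping configuration, with hypothetical FDC-side and RMS-side names:

```python
# Hypothetical Mapping configuration translating FDC-side names into the
# names stored in the RMS database.
NAME_MAP = {
    ("FDC", "recipe"): {"ETCH_01": "RCP_ETCH_001"},
    ("FDC", "param"):  {"TEMP": "Temperature", "PRESS": "Pressure"},
}

def to_rms_name(system: str, kind: str, name: str) -> str:
    """Fall back to the original name when no mapping entry exists."""
    return NAME_MAP.get((system, kind), {}).get(name, name)

assert to_rms_name("FDC", "param", "TEMP") == "Temperature"
assert to_rms_name("FDC", "param", "GasFlow") == "GasFlow"  # unmapped: unchanged
```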
Step S702, a data request incoming from a target system through a data interface is received, the data request including a system name, a recipe list, the recipe list including a tool identifier, a recipe name, and a parameter list. For example, for an FDC system, a request is made through a specified data interface, the request data structure is as follows:
1. SYSTEMNAME (requesting system name)
2. RECIPELIST (requested Recipe list)
2.1 ToolId (tool to which the Recipe belongs)
2.2 RECIPENAME (Recipe name)
2.3 Params (String array listing which parameters need to be queried)
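Serialized as JSON, such a request might look like the following; the field values are hypothetical:

```python
import json

# Hypothetical request from an FDC system following the structure above.
request = {
    "SystemName": "FDC",
    "RecipeList": [
        {
            "ToolId": "ETCH-01",
            "RecipeName": "RCP_ETCH_001",
            "Params": ["Temperature", "Pressure"],
        }
    ],
}
print(json.dumps(request, indent=2))
```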
Step S703, converting the recipe name and the parameter name in the data request into corresponding recipe names and parameter names in the database according to the configured mapping relationship.
Step S704, based on the converted names, corresponding recipe parameter information and parameter control ranges are queried in a database.
Specifically, the RMS performs data Mapping, namely, converts the Recipe name and the parameter name in the FDC request into corresponding Recipe names and parameter names in the RMS system according to Mapping configuration, and queries Recipe parameter information and control range data.
Step S705, generating response data, where the response data includes a system name and a recipe list, the recipe list includes a tool identifier, a recipe name, and parameter information, and the parameter information includes a parameter name, a parameter setting value, a parameter control lower limit value, and a parameter control upper limit value. In step S706, the response data is returned to the target system through the data interface.
Taking the above example, the RMS returns data to the standard data structure of the FDC system, and the return data structure is as follows:
1. SYSTEMNAME (requesting system name)
2. RECIPELIST (requested Recipe list)
2.1 ToolId (tool to which the Recipe belongs)
2.2 RECIPENAME (Recipe name)
2.3 Params
2.3.1 Name (parameter name)
2.3.2 Value (parameter set point)
2.3.3 LSL (parameter lower control limit)
2.3.4 USL (parameter upper control limit)
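A corresponding response following this structure might look like the following; all values are hypothetical:

```python
# Hypothetical response mirroring the return structure above, with each
# parameter carrying its set value and control limits.
response = {
    "SystemName": "FDC",
    "RecipeList": [
        {
            "ToolId": "ETCH-01",
            "RecipeName": "RCP_ETCH_001",
            "Params": [
                {"Name": "Temperature", "Value": 350, "LSL": 340, "USL": 360},
                {"Name": "Pressure", "Value": 2.5, "LSL": 2.0, "USL": 3.0},
            ],
        }
    ],
}
```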
Fig. 8 is an exemplary schematic diagram of a request data structure in the data return format according to an embodiment of the present application, and fig. 9 is an exemplary schematic diagram of a reply data structure in the data return format according to an embodiment of the present application. The final return form described in the above embodiments may be as shown in figs. 8 and 9.
Second embodiment
Fig. 10 is a schematic block diagram of a data management system for semiconductor process recipe parameters according to an embodiment of the present application. It is to be understood that the system shown in the drawings is intended to be illustrative and not restrictive. This means that the system architecture involved is not limited to a particular form or design, but is presented as an example. In other words, the architecture shown in the figures may be regarded as an expression to clearly describe related concepts and relationships, and not to exclude other forms of architecture. Thus, in explaining the architecture in the figures, it should be understood that the model has flexibility and versatility, and is intended to provide an exemplary description rather than a restrictive provision for a particular form.
Specifically, as shown in fig. 10, a data management system for performing the above data management method of semiconductor process recipe parameters includes a recipe acquisition unit 1001 and a data management unit 1002. The recipe acquisition unit 1001 is configured to acquire data containing the semiconductor process recipe in one or more formats from one or more devices. The data management unit 1002 is configured to parse the acquired process recipe in the one or more data formats to convert it into a target data format, store the converted recipe in a database, and, in response to a data request transmitted through a data interface, return the corresponding process recipe data in the database through the data interface in the format of the data request based on a configured mapping relation.
In summary, the embodiment of the application supports offline and online sharing of recipe parameters through a unified data structure and format. It should be understood that offline sharing here refers to data exchange and use in a non-automated production environment, such as engineers calling relevant parameters from the Recipe Management System (RMS) to pre-populate fields when configuring Fault Detection and Classification (FDC) parameters. The offline sharing mode does not depend on a real-time network connection and can exchange data through file transmission or other non-real-time means.

Online sharing refers to the situation where the system automatically checks and uses recipe parameters during automated production. For example, during the automated processing of wafers, the control program may access the RMS in real time, automatically check whether the currently used recipe meets specification requirements, and update it in real time as needed. This online sharing mechanism ensures that the recipe parameters used in the production process are always up to date and compliant.

This flexible sharing mechanism of the present embodiment has several advantages. First, it accommodates different job scenario requirements: in an offline environment, engineers can conveniently acquire and pre-fill parameters, improving configuration efficiency, while in an online production environment, the system can automatically check and update parameters, ensuring the stability and consistency of the production process. Second, the unified data structure and format simplifies system integration, so that the RMS can seamlessly interface with various upstream and downstream systems, such as FDC systems and Manufacturing Execution Systems (MES). Third, this mechanism improves data consistency and traceability.
Whether configured offline or used online, all parameter changes can be recorded and managed in the RMS for subsequent auditing and analysis.
Further, the system enables other systems (e.g., a fault detection and classification system FDC, an advanced process control system APC) to query the recipe information in a standard format. This interoperability ensures parameter consistency among different systems and avoids production problems caused by data inconsistency. For example, the FDC system may use standardized recipe parameters to set monitor thresholds, and the APC system may make real-time process adjustments based on these parameters.
The method of the embodiment of the application significantly reduces the repeated labor of engineers in recipe management. Traditionally, engineers may need to manually input recipe parameters from one system to another, or to switch between different formats. This is not only time consuming, but also prone to errors. Through an automated data management process, engineers can focus more time and effort on process optimization and problem solving, thereby improving production efficiency.
In addition, the method also helps to improve the stability of process quality. By ensuring that all relevant systems use consistent, up-to-date recipe parameters, process fluctuations caused by inconsistent or outdated parameters are reduced. This is particularly important in semiconductor manufacturing, an industry that imposes extremely high requirements on precision and consistency.
It is to be noted that this embodiment is a system example corresponding to the first embodiment, and can be implemented in cooperation with the first embodiment. The related technical details mentioned in the first embodiment remain valid in this embodiment and, to reduce repetition, are not described in detail here. Accordingly, the related technical details mentioned in this embodiment can also be applied to the first embodiment.
It should be noted that each module in this embodiment is a logic module. In practical applications, one logic unit may be one physical unit, may be a part of one physical unit, or may be implemented by a combination of multiple physical units. In addition, in order to highlight the innovative part of the present application, units that are less closely related to solving the technical problem addressed by the present application are not introduced in this embodiment, but this does not mean that no other units exist in this embodiment.
Third embodiment
In addition, some embodiments of the application also provide an electronic device. The electronic device may be a digital computer in various forms, such as a laptop computer, a desktop computer, a workstation, a personal digital assistant, a server, a blade server, a mainframe computer, and the like. The electronic device may also be a mobile device in various forms, such as a personal digital assistant, a cellular telephone, a smart phone, a wearable device, or another similar computing device.
The electronic device comprises one or more processors and a memory storing computer program instructions that, when executed, cause the processors to perform the steps of the method as provided in any one or more of the embodiments described above. Fig. 11 discloses an exemplary structural diagram of the electronic device. As shown in fig. 11, the electronic device includes one or more processors 1101, a memory 1102, and interfaces for connecting the components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In some other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). The components shown herein, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the applications described and/or claimed herein.
The electronic device may further comprise an input device 1103 and an output device 1104. The processor 1101, the memory 1102, the input device 1103 and the output device 1104 may be connected by a bus or in other manners; connection by a bus is exemplified in fig. 11.
The input device 1103 may receive input digital or character information and generate key signal inputs related to user settings and function control of the electronic device; examples include a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointing stick, one or more mouse buttons, a trackball, a joystick, and other input devices. The output device 1104 may include a display device, auxiliary lighting (e.g., LEDs), haptic feedback (e.g., a vibration motor), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
To provide for interaction with a user, the electronic device may be a computer. The computer has a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other types of devices may also be used to provide interaction with the user, for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form (including acoustic input, speech input, or tactile input).
Fourth embodiment
In an embodiment of the present application, a computer readable medium has stored thereon a computer program/instruction which, when executed by a processor, implements the steps of the method provided by any one or more of the embodiments described above. The computer readable medium may be contained in the electronic device described in the above embodiment or may exist alone without being incorporated in the device. The computer-readable medium carries one or more computer-readable instructions.
Memory 1102 may be used as a non-transitory computer readable storage medium for storing non-transitory software programs, non-transitory computer executable programs, and modules. The processor 1101 executes various functional applications of the server and data processing by running non-transitory software programs, instructions, and modules stored in the memory 1102 to implement program instructions/modules corresponding to the methods provided by any one or more of the embodiments of the present application.
The memory 1102 may include a storage program area that may store an operating system, application programs required for at least one function, and a storage data area that may store data created according to the use of the electronic device, etc. In addition, memory 1102 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, memory 1102 optionally includes memory remotely located relative to processor 1101, which may be connected to the electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The computer readable medium according to the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of a computer-readable storage medium may include, but are not limited to, an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Computer-readable media include both permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information that can be accessed by a computing device.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. For example, an Application Specific Integrated Circuit (ASIC), a general purpose computer, or any other similar hardware device may be employed. In some embodiments, the software program of the present application may be executed by a processor to implement the above steps or functions. Likewise, the software programs of the present application (including associated data structures) may be stored on a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. In addition, some steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
Fifth embodiment
Embodiments of the present application provide a computer program product comprising one or more computer programs/instructions which, when executed by a processor, produce, in whole or in part, a process or function in accordance with embodiments of the present application. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable devices. The computer instructions may be stored in or transmitted from one computer-readable storage medium to another, for example, by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), etc.
The flowchart or block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of devices, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The scope of the application is indicated by the appended claims rather than by the foregoing description, and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. A plurality of units or means recited in the apparatus claims can also be implemented by means of one unit or means in software or hardware. The words "first," "second," and the like are used merely to distinguish between descriptions and do not indicate any particular order, nor are they to be construed as indicating or implying relative importance.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto; any person skilled in the art can readily conceive of variations or substitutions within the scope of the present application. The present application is therefore to be considered in all respects as illustrative and not restrictive, and the scope of the application is indicated by the appended claims.