Disclosure of Invention
In view of the above problems, the invention provides a data management method and system for an API docking platform of the Internet of Things. Through the scheme of the invention, various anomalies can be identified in a timely and accurate manner, identification accuracy and speed are improved, and abnormal events are described as structured data, which facilitates subsequent storage, query and analysis.
In view of this, an aspect of the present invention provides a data management method for an API docking platform of the Internet of Things, including:
Acquiring first original data from Internet of Things devices, and performing unified standardization processing on the first original data through an adaptive data format conversion module to obtain first data;
performing semantic analysis on the standardized first data by using a machine-learning-based semantic analysis module, automatically generating a data meta-model and constructing a knowledge graph;
storing and tracing the first data, the data meta-model and the knowledge graph through a blockchain-based data storage module using blockchain technology, so as to ensure the verifiability and tamper resistance of the first data;
acquiring first real-time data from Internet of Things devices, and performing unified standardization processing on the first real-time data through the adaptive data format conversion module to obtain second data;
analyzing the second data by using the data meta-model and the knowledge graph to obtain third data;
performing intelligent analysis and processing on the third data through a real-time data flow analysis module using real-time data flow analysis technology, and dynamically optimizing the API response speed in combination with a network flow regulation algorithm;
extracting, from the second data, first image data acquired by a preset monitoring device;
performing intelligent analysis on the first image data through an image-recognition-based monitoring analysis module using image recognition and deep learning technologies, identifying abnormal conditions and triggering early warning;
generating abnormal event description data from the result of identifying the abnormal conditions according to a preset abnormal event model, and storing the abnormal event description data.
Optionally, the step of obtaining the first original data from the internet of things device and performing unified standardization processing on the first original data through the adaptive data format conversion module to obtain the first data includes:
Constructing a universal interface supporting uploading data of different types of Internet of things equipment;
Constructing an adaptive data format conversion module based on machine learning aiming at the data formats of different devices;
performing format conversion on the first original data by using the adaptive data format conversion module, performing metadata annotation on the converted data, and recording metadata such as device ID, data type and acquisition time;
generating a conversion log for the first original data.
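The conversion and logging steps above can be sketched as follows. This is a minimal illustration only, with hypothetical payloads, field names and supported formats; it is not the module of the invention itself.

```python
import json
import time

def normalize_record(raw: bytes, fmt: str, device_id: str) -> dict:
    """Convert one raw payload into a unified internal format and
    annotate it with metadata (device ID, data type, acquisition time)."""
    if fmt == "json":
        body = json.loads(raw.decode("utf-8"))
    elif fmt == "csv":                      # e.g. b"temperature,23.5"
        key, value = raw.decode("utf-8").split(",", 1)
        body = {key: float(value)}
    else:
        raise ValueError(f"unsupported format: {fmt}")
    return {
        "device_id": device_id,
        "data_type": next(iter(body)),
        "acquired_at": time.time(),
        "payload": body,
    }

conversion_log = []

def convert(raw: bytes, fmt: str, device_id: str) -> dict:
    """Format conversion plus a conversion-log entry for traceability."""
    record = normalize_record(raw, fmt, device_id)
    conversion_log.append({"device_id": device_id, "source_format": fmt})
    return record

first_data = convert(b'{"temperature": 23.5}', "json", "sensor-01")
```

The conversion log makes it possible to trace every record from acquisition to standardization, as the parenthetical in the later embodiment notes.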
Optionally, the step of performing semantic analysis on the normalized first data by using a semantic analysis module based on a machine learning technology, automatically generating a data meta-model and constructing a knowledge graph includes:
inputting the normalized first data to a semantic analysis module;
Carrying out semantic analysis on the first data by utilizing natural language processing and machine learning technology, and identifying semantic elements such as concepts, entities and relations contained in the first data;
automatically constructing a data meta-model based on the semantic analysis result of the previous step;
Converting the identified semantic elements into nodes and edges in the knowledge graph, and constructing a structured knowledge base containing semantic information;
and continuously learning and optimizing the semantic analysis, the data meta-model and the knowledge graph by using machine learning technology.
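The conversion of identified semantic elements into graph nodes and edges can be sketched as below. The triples and their labels are hypothetical examples, standing in for the output of the semantic analysis module.

```python
# Hypothetical illustration: turning identified semantic elements
# (entities and relations) into nodes and edges of a knowledge graph.

def build_graph(triples):
    """triples: (subject, relation, object) tuples from semantic analysis."""
    nodes, edges = set(), []
    for subj, rel, obj in triples:
        nodes.update([subj, obj])
        edges.append({"source": subj, "relation": rel, "target": obj})
    return {"nodes": sorted(nodes), "edges": edges}

# Example semantic-analysis output (assumed for illustration)
triples = [
    ("sensor-01", "located_in", "workshop-A"),
    ("sensor-01", "measures", "temperature"),
]
graph = build_graph(triples)
```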
Optionally, the step of storing and tracing the first data, the data meta-model and the knowledge graph through the blockchain-based data storage module using blockchain technology to ensure the verifiability and tamper resistance of the first data includes:
Selecting an appropriate blockchain platform, deploying blockchain network nodes, constructing a distributed ledger, and generating a first blockchain network;
uploading the first data, the data meta-model and the knowledge graph to the first blockchain network, and storing them in blocks in the distributed ledger through a smart contract;
Establishing a digital identity certificate for identity authentication and authority management for each participant of an API docking platform of the Internet of things;
setting access control strategies of data according to different roles of each participant so as to ensure the security of the data;
Recording operations of creating, modifying and accessing the first data, the data meta-model and the knowledge graph, and generating an audit log;
Constructing a data query interface for each participant to query and verify the data content according to the requirements;
according to business requirements and technical development, the performance and the function of the blockchain network are optimized to support flexible integration of new data types and application scenes.
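The verifiability and tamper resistance described above can be illustrated with a toy hash-chained ledger. A real deployment would of course use an actual blockchain platform with consensus, smart contracts and access control; this sketch only shows why modifying stored data breaks the chain.

```python
import hashlib
import json

class SimpleLedger:
    """Toy hash-chained ledger illustrating tamper evidence; each block
    commits to its payload and to the previous block's hash."""

    def __init__(self):
        self.blocks = []

    def append(self, payload: dict) -> dict:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = json.dumps({"payload": payload, "prev": prev_hash},
                          sort_keys=True)
        block = {"payload": payload, "prev": prev_hash,
                 "hash": hashlib.sha256(body.encode()).hexdigest()}
        self.blocks.append(block)
        return block

    def verify(self) -> bool:
        """Recompute every hash; any modification breaks the chain."""
        prev = "0" * 64
        for block in self.blocks:
            body = json.dumps({"payload": block["payload"], "prev": prev},
                              sort_keys=True)
            if block["prev"] != prev or \
               block["hash"] != hashlib.sha256(body.encode()).hexdigest():
                return False
            prev = block["hash"]
        return True

ledger = SimpleLedger()
ledger.append({"first_data": {"temperature": 23.5}})
ledger.append({"meta_model": "sensor-reading-v1"})
```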
Optionally, the step of analyzing the second data by using the data meta-model and the knowledge graph to obtain third data includes:
performing semantic check and mapping between the second data and the data meta-model;
analyzing the checked and mapped second data by using the entities, attributes and relations in the knowledge graph, and deducing third data by applying inference rules, wherein the third data comprises data mining results, algorithm analysis output, prediction model findings, user feedback, market research results and expert evaluation results;
And outputting the third data in a proper format according to the requirements of the application scene.
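The check-map-infer pipeline above can be sketched as follows. The meta-model schema, knowledge-graph facts and the overheat rule are all assumptions for illustration, not part of the invention.

```python
# Hypothetical sketch: validating second data against a data meta-model,
# then applying a simple inference rule over knowledge-graph facts.

meta_model = {"temperature": float, "device_id": str}   # assumed schema

def check_and_map(record: dict) -> dict:
    """Semantic check: every kept field must exist in the meta-model with
    the expected type; mapping: drop fields outside the model."""
    mapped = {}
    for field, value in record.items():
        if field in meta_model and isinstance(value, meta_model[field]):
            mapped[field] = value
    return mapped

def infer(record: dict, facts: dict) -> dict:
    """Inference rule (assumed): a temperature above the device's
    threshold stored in the knowledge graph yields an overheat finding."""
    threshold = facts.get(record.get("device_id"), {}).get("max_temp", 100.0)
    record["overheat"] = record.get("temperature", 0.0) > threshold
    return record

facts = {"sensor-01": {"max_temp": 80.0}}   # knowledge-graph attributes
third = infer(check_and_map({"device_id": "sensor-01",
                             "temperature": 95.0,
                             "noise": "x"}), facts)
```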
Optionally, the step of the real-time data flow analysis module adopting a real-time data flow analysis technology to intelligently analyze and process the third data and dynamically optimizing the API response speed in combination with a network flow regulation algorithm includes:
Receiving and processing the third data using the streaming computing engine;
Designing a real-time analysis algorithm aiming at service requirements to analyze the third data to obtain a real-time analysis result;
and monitoring the access flow and response time of the API interface, applying a dynamic flow adjustment algorithm based on feedback control, adjusting algorithm parameters according to a real-time analysis result, and dynamically optimizing the API response speed.
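The feedback-controlled flow adjustment in the last step can be sketched as a proportional controller on the response-time error. The target, gain and rate limits below are illustrative assumptions, not values specified by the invention.

```python
# Minimal sketch of feedback control for API response speed: when the
# measured response time exceeds the target, the admitted request rate
# is reduced proportionally; when below target, it is increased.

def adjust_rate(current_rate: float, measured_ms: float,
                target_ms: float = 100.0, gain: float = 0.5) -> float:
    """Proportional controller on the relative response-time error."""
    error = (target_ms - measured_ms) / target_ms
    new_rate = current_rate * (1.0 + gain * error)
    return max(1.0, min(new_rate, 10_000.0))   # clamp to sane bounds

rate = 1000.0
rate_when_slow = adjust_rate(rate, measured_ms=200.0)  # over target: throttle
rate_when_fast = adjust_rate(rate, measured_ms=50.0)   # under target: grow
```

In practice the controller parameters would themselves be tuned from the real-time analysis results, as the step describes.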
Optionally, the step of extracting the first image data collected by the preset monitoring device from the second data includes:
Analyzing the received second data, and identifying and extracting an image data frame carrying a mark of a preset monitoring device as first image data;
performing format conversion and compression on the first image data according to service requirements;
Checking the integrity and validity of the extracted first image data;
Performing basic enhancement and optimization processing on the first image data;
the extracted first image data is stored in a lasting mode, and historical data query and analysis are supported;
A design data management module providing fast retrieval and access capabilities for the first image data;
According to the service requirement, data encryption and access control measures are implemented to ensure the integrity and confidentiality of the first image data, and the privacy information is protected.
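The frame-extraction and integrity-check steps can be sketched as below. The device identifiers, field names and the size-based validity check are assumptions made for this illustration.

```python
# Hypothetical sketch: filtering image frames carrying the marker of a
# preset monitoring device out of the mixed second-data stream, with a
# basic integrity and validity check.

PRESET_DEVICES = {"cam-01", "cam-02"}

def is_valid_frame(frame: dict) -> bool:
    """Integrity check: non-empty image bytes matching the declared size."""
    data = frame.get("image", b"")
    return len(data) > 0 and len(data) == frame.get("size", -1)

def extract_first_image_data(second_data: list) -> list:
    return [f for f in second_data
            if f.get("device") in PRESET_DEVICES and is_valid_frame(f)]

stream = [
    {"device": "cam-01", "image": b"\x89PNG...", "size": 7},
    {"device": "sensor-01", "temperature": 23.5},     # not image data
    {"device": "cam-02", "image": b"", "size": 0},    # fails integrity check
]
frames = extract_first_image_data(stream)
```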
Optionally, the step of intelligently analyzing the first image data based on image recognition and deep learning technology by using a monitoring analysis module based on image recognition, recognizing abnormal conditions and triggering early warning includes:
Performing preprocessing operations of format conversion, scale adjustment and color correction on the first image data to obtain second image data;
acquiring historical monitoring image data, and constructing an image sample data set according to the historical monitoring image data;
Selecting a proper deep learning model architecture;
performing supervised learning on the model by using the image sample data set, and optimizing model parameters;
Through model evaluation and tuning, the recognition accuracy and generalization capability of the model are improved;
deploying the trained deep learning model into a monitoring analysis module;
Defining a judgment rule of abnormal conditions;
performing intelligent analysis on the second image data by using the deep learning model in combination with the judgment rules, and identifying whether an abnormal condition exists;
when an abnormal condition exists, triggering an early warning mechanism;
and classifying and evaluating the severity of the abnormal situation according to the service requirement.
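The combination of model output, judgment rules, early-warning trigger and severity grading can be sketched as follows. The labels, thresholds and severity cut-offs are hypothetical; a real module would obtain the scores from the trained deep learning model rather than a hard-coded dictionary.

```python
# Illustrative sketch only: combining (assumed) deep-learning model
# confidence scores with predefined judgment rules to flag abnormal
# conditions and grade their severity.

JUDGMENT_RULES = {"intrusion": 0.8, "smoke": 0.6}   # label -> threshold

def detect_anomalies(model_scores: dict) -> list:
    """model_scores: label -> confidence from the image model."""
    events = []
    for label, threshold in JUDGMENT_RULES.items():
        score = model_scores.get(label, 0.0)
        if score >= threshold:
            severity = "high" if score >= 0.9 else "medium"
            events.append({"label": label, "score": score,
                           "severity": severity, "alert": True})
    return events

events = detect_anomalies({"intrusion": 0.95, "smoke": 0.3})
```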
Optionally, the step of generating the abnormal event description data according to the result of identifying the abnormal situation and the preset abnormal event model and storing the abnormal event description data includes:
determining basic information of an abnormal event;
Converting the identified basic information of the abnormal event into standardized abnormal event description data according to a predefined abnormal event model;
and storing the generated abnormal event description data into a preset database or file system.
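Generating standardized event descriptions against a predefined event model and persisting them can be sketched as below. The event-model fields and the list-backed store are illustrative assumptions standing in for the preset model and the database or file system.

```python
import json
import time

# Hypothetical abnormal-event model: the fields every description
# record must carry. The schema is assumed for illustration.
EVENT_MODEL = ("event_type", "device_id", "severity", "occurred_at")

def describe_event(event_type: str, device_id: str, severity: str) -> dict:
    record = {"event_type": event_type, "device_id": device_id,
              "severity": severity, "occurred_at": time.time()}
    assert all(k in record for k in EVENT_MODEL)   # conforms to the model
    return record

event_store = []   # stand-in for a database or file system

def store_event(record: dict) -> str:
    event_store.append(record)
    return json.dumps(record, sort_keys=True)      # serialized form

serialized = store_event(describe_event("intrusion", "cam-01", "high"))
```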
The invention further provides a data management system for an API docking platform of the Internet of Things, which comprises Internet of Things devices, monitoring devices, a management server and a data processing module, wherein the management server is provided with an adaptive data format conversion module, a semantic analysis module, a data storage module, a real-time data flow analysis module and a monitoring analysis module;
The management server is configured to:
Acquiring first original data from Internet of Things devices, and performing unified standardization processing on the first original data through the adaptive data format conversion module to obtain first data;
performing semantic analysis on the standardized first data by using the machine-learning-based semantic analysis module, automatically generating a data meta-model and constructing a knowledge graph;
storing and tracing the first data, the data meta-model and the knowledge graph through the blockchain-based data storage module using blockchain technology, so as to ensure the verifiability and tamper resistance of the first data;
acquiring first real-time data from Internet of Things devices, and performing unified standardization processing on the first real-time data through the adaptive data format conversion module to obtain second data;
analyzing the second data by using the data meta-model and the knowledge graph to obtain third data;
performing intelligent analysis and processing on the third data through the real-time data flow analysis module using real-time data flow analysis technology, and dynamically optimizing the API response speed in combination with a network flow regulation algorithm;
extracting, from the second data, first image data acquired by a preset monitoring device;
performing intelligent analysis on the first image data through the image-recognition-based monitoring analysis module using image recognition and deep learning technologies, identifying abnormal conditions and triggering early warning;
generating abnormal event description data from the result of identifying the abnormal conditions according to a preset abnormal event model, and storing the abnormal event description data.
By adopting the technical scheme of the invention, the data management method for the API docking platform of the Internet of Things includes: acquiring first original data from Internet of Things devices, and performing unified standardization processing on the first original data through an adaptive data format conversion module to obtain first data; performing semantic analysis on the standardized first data by using a machine-learning-based semantic analysis module, automatically generating a data meta-model and constructing a knowledge graph; storing and tracing the first data, the data meta-model and the knowledge graph through a blockchain-based data storage module to ensure the verifiability and tamper resistance of the first data; acquiring first real-time data from Internet of Things devices, and performing unified standardization processing on the first real-time data through the adaptive data format conversion module to obtain second data; analyzing the second data by using the data meta-model and the knowledge graph to obtain third data; performing intelligent analysis and processing on the third data by using real-time data flow analysis technology, and dynamically optimizing the API response speed in combination with a network flow regulation algorithm; extracting, from the second data, first image data collected by a preset monitoring device; performing intelligent analysis on the first image data using image recognition and deep learning technologies, identifying abnormal conditions and triggering early warning; and generating abnormal event description data from the identified abnormal conditions according to a preset abnormal event model, and storing the abnormal event description data.
The method and the device can timely and accurately identify various abnormal conditions and acquire relevant basic information by using advanced image recognition and deep learning technologies, and continuously improve the accuracy and speed of abnormal event identification through continuous data analysis and algorithm optimization. The identified abnormal conditions are converted into structured abnormal event description data according to a predefined abnormal event model; the standardized data format facilitates subsequent storage, query and analysis, and improves the readability and processability of the information. The generated abnormal event description data are permanently stored in a database or file system, building a complete abnormal event history and providing reliable data support for subsequent data analysis, tracing and auditing.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will be more clearly understood, a more particular description of the application will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, without conflict, the embodiments of the present application and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced otherwise than as described herein, and therefore the scope of the present invention is not limited to the specific embodiments disclosed below.
The terms first, second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
A data management method and system for an API docking platform of the internet of things according to some embodiments of the present invention are described below with reference to fig. 1 to 2.
As shown in fig. 1, an embodiment of the present invention provides a data management method for an API docking platform of the internet of things, including:
Acquiring first original data from Internet of things equipment, and performing unified standardization processing on the first original data through a self-adaptive data format conversion module to obtain first data;
In this step, the Internet of Things devices include, but are not limited to, sensing devices (e.g., temperature and humidity sensors, illumination sensors, position sensors (GPS, RFID, Bluetooth, etc.), motion sensors (accelerometers, gyroscopes, etc.), environmental monitoring sensors (air quality, water quality, etc.), physiological health sensors (heart rate, blood pressure, etc.)), executing devices (e.g., industrial automation devices (robotic arms, assembly lines, etc.), smart home devices (lights, air conditioners, appliances, etc.), smart mobile devices such as drones and robots), gateway devices (e.g., routers, bridges, protocol converters, etc.), communication devices (smart phones, tablet computers, Internet of Things modules (WiFi/Bluetooth/NB-IoT, etc.), Internet of Things gateways, etc.), storage devices (e.g., cloud storage servers, edge computing devices, etc.), processing devices (e.g., servers, embedded microprocessors, industrial PCs, etc.), and the like. The adaptive data format conversion module comprises a data acquisition adapter, a unified data model mapping engine and a real-time data preprocessing function, wherein the data acquisition adapter can automatically identify and adapt to the data formats of various Internet of Things devices, the unified data model mapping engine converts original data into a standardized data model, and the real-time data preprocessing function performs operations such as cleaning, correction and normalization on the data.
Carrying out semantic analysis on the normalized first data by utilizing a semantic analysis module based on a machine learning technology, automatically generating a data element model and constructing a knowledge graph;
In this step, the machine-learning-based semantic analysis module comprises a natural language processing engine, a knowledge graph construction algorithm, a transfer learning model and a pre-training model, wherein the natural language processing engine is used to extract key concepts, attributes and entities in the data, the knowledge graph construction algorithm establishes semantic associations between the data to form a knowledge network, and the pre-training model is used to rapidly adapt to the data semantics of different fields.
The first data, the data meta-model and the knowledge graph are stored and traced by using a blockchain technology through a blockchain-based data storage module, so that verifiability and non-tamper resistance of the first data are ensured;
In this step, the blockchain-based data storage module comprises a distributed ledger storage engine, a smart contract management system and a privacy computing technology, wherein the distributed ledger storage engine stores data in a tamper-proof blockchain structure, the smart contract management system defines and automatically executes the rules of data access and sharing, and the privacy computing technology realizes the secure processing and joint analysis of sensitive data.
Acquiring first real-time data from Internet of things equipment, and performing unified standardization processing on the first real-time data through a self-adaptive data format conversion module to obtain second data;
In this step, original data are acquired in real time from Internet of Things devices through various connection protocols (such as MQTT, HTTP, CoAP and the like), and access via various data encoding formats (such as JSON, XML, binary and the like) is supported. The method has the capability of automatically identifying the device type and data format, and dynamically invokes the corresponding data parser according to the device type and data encoding format. The original data are converted into a unified target data format through the adaptive data mapping module, and new device types and data formats can be supported by extension without modifying the core system. Semantic and syntax checks are performed on the converted data to ensure its integrity and validity, and unified unit conversion, numerical standardization and the like are performed on the data according to predefined data standards to generate second data meeting the requirements of the target system. For network anomalies or offline data, a caching mechanism is adopted to temporarily store the data, which are automatically retransmitted when the network is restored; real-time monitoring and querying of the cached data are supported, facilitating fault checking and data supplementation. An asynchronous, non-blocking I/O model is adopted so that a large number of device access and data conversion tasks can be processed simultaneously, and the memory and storage resources of the system are optimized and dynamically scaled according to the available hardware resources.
Through the above implementation steps, multiple Internet of Things device types and data encoding formats can be supported, and the system has adaptive data access capability and can rapidly and accurately convert original data into a standardized target data format. Mechanisms such as data verification, caching and retransmission ensure the integrity and reliability of the data; the asynchronous, non-blocking multi-thread/multi-process architecture design can support large-scale device access and data processing; and new device types and data formats can be dynamically integrated without modifying the core system. In a word, the scheme can effectively acquire real-time data from Internet of Things devices and, through adaptive data format conversion and standardization processing, provide high-quality data in a unified format for upper-layer application systems, thereby meeting the various requirements of Internet of Things applications.
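The cache-and-retransmit mechanism described above can be sketched as follows; the transport callback and record shapes are assumptions for illustration.

```python
from collections import deque

# Sketch of the caching and automatic retransmission behavior: records
# are buffered while the network is down and flushed in order on
# recovery. Entirely illustrative.

class ResilientSender:
    def __init__(self, send_fn):
        self.send_fn = send_fn        # actual transport (assumed callback)
        self.online = True
        self.cache = deque()

    def send(self, record: dict) -> bool:
        if not self.online:
            self.cache.append(record)  # temporary storage while offline
            return False
        self.send_fn(record)
        return True

    def restore_network(self):
        """Automatically retransmit cached records in arrival order."""
        self.online = True
        while self.cache:
            self.send_fn(self.cache.popleft())

delivered = []
sender = ResilientSender(delivered.append)
sender.send({"seq": 1})
sender.online = False
sender.send({"seq": 2})               # cached, not delivered yet
sender.restore_network()              # flushed on recovery
```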
Analyzing the second data by using the data meta-model and the knowledge graph to obtain third data;
The real-time data flow analysis module adopts a real-time data flow analysis technology to carry out intelligent analysis and processing on the third data, and dynamically optimizes the API response speed by combining a network flow regulation algorithm;
In this step, the real-time data flow analysis module comprises a complex event processing engine, a dynamic flow regulation algorithm, and a joint optimization engine that coordinates the data analysis, network regulation and security policy modules, wherein the complex event processing engine monitors abnormal patterns in the data stream in real time and triggers early warning, and the dynamic flow regulation algorithm automatically adjusts the API response strategy according to network load conditions.
Extracting first image data acquired by preset monitoring equipment from the second data;
Using a monitoring analysis module based on image recognition, performing intelligent analysis on the first image data based on image recognition and deep learning technology, recognizing abnormal conditions and triggering early warning;
In this step, the image-recognition-based monitoring analysis module comprises a video monitoring adapter, a deep learning image recognition model and a multi-source data fusion engine, wherein the video monitoring adapter connects various monitoring devices and unifies their data formats, the deep learning image recognition model realizes target detection, abnormal behavior recognition and the like on images, and the multi-source data fusion engine comprehensively analyzes video streams, sensor data and the like.
Generating abnormal event description data according to a preset abnormal event model according to the result of identifying the abnormal condition, and storing the abnormal event description data.
By adopting the technical scheme of this embodiment, advanced image recognition and deep learning technologies are used so that various abnormal conditions can be identified in a timely and accurate manner and relevant basic information obtained, and the accuracy and speed of abnormal event identification are continuously improved through continuous data analysis and algorithm optimization. The identified abnormal conditions are converted into structured abnormal event description data according to a predefined abnormal event model; the standardized data format facilitates subsequent storage, query and analysis, and improves the readability and processability of the information. The generated abnormal event description data are permanently stored in a database or file system, building a complete abnormal event history and providing reliable data support for subsequent data analysis, tracing and auditing.
In some possible embodiments of the invention, security measures such as encrypted storage and permission control can be adopted for the abnormal event data to ensure data security and privacy. Reasonable data access permission management is implemented according to the requirements of different roles to prevent unauthorized access, and a logging mechanism is established to track data access and operations. The abnormal event description data can be combined with other monitoring data for in-depth anomaly analysis and event correlation, providing managers with more accurate and comprehensive abnormal event information to support rapid decision-making and response. Through continuous data analysis and feedback optimization, the capability to identify and handle abnormal events is continuously improved; the persistent storage and analysis of abnormal events helps to discover system vulnerabilities, optimize monitoring strategies, and provide more valuable data support for the operation and maintenance management of the monitoring system.
In general, the scheme of the embodiment can realize intelligent identification, standardized description, safe storage and deep analysis of abnormal events, and provides powerful support for improving the overall reliability and operation and maintenance efficiency of the monitoring system.
In some possible embodiments of the present invention, the step of obtaining first original data from an internet of things device and performing unified normalization processing on the first original data through an adaptive data format conversion module to obtain first data includes:
Constructing a universal interface supporting uploading data of different types of Internet of things equipment (the interface needs to define a standard data format and a transmission protocol);
constructing a machine-learning-based adaptive data format conversion module for the data formats of different devices (the module can learn to identify original data in different formats, extract effective fields and perform standardization processing; the standardization processing includes data cleaning, missing-data filling, data type conversion and the like, so as to convert data in different formats into a unified internal data format);
performing format conversion on the first original data by using the adaptive data format conversion module, performing metadata annotation on the converted data, and recording metadata such as device ID, data type and acquisition time (the metadata are important for subsequent data analysis and application);
A conversion log (which can track the whole process from data acquisition to standardization, and is convenient for problem investigation) of the first original data is generated.
In this embodiment, various original monitoring data (such as video images, temperature, humidity and the like) are acquired through devices such as Internet of Things sensors and cameras, and connectivity and a data transmission channel between the acquisition devices and the API docking platform of the Internet of Things are ensured. Since the data formats used by Internet of Things devices of different manufacturers differ, unified standardization processing needs to be performed on the original data through the adaptive data format conversion module, converting the original data into a unified data format that the platform can read and understand, such as JSON or XML. While performing format conversion, quality inspection also needs to be carried out on the data to ensure its integrity, accuracy and validity and to avoid subsequent analysis errors caused by data quality problems. The standardized data that have passed format conversion and quality verification are stored in the database of the API docking platform of the Internet of Things, providing a reliable data basis for subsequent data management, analysis and application.
Through the above steps, effective format unification and quality control can be applied to the original data from different Internet of Things devices, eliminating the data gap between different devices and systems and providing a unified data basis for subsequent data analysis and application. Format conversion and quality verification ensure the integrity and accuracy of the data and enhance its readability and usability; the data format differences of the various devices no longer need attention, which simplifies the data access and processing flow and improves working efficiency. The standardized data format also benefits subsequent data mining and machine learning, improving the data analysis capability and intelligence level of the Internet of Things system. In a word, this link builds a high-quality data foundation for the API docking platform of the Internet of Things and provides solid support for subsequent data management, analysis and application.
In this embodiment, the specific construction of the adaptive data format conversion module includes the following steps:
identifying the specific format adopted by the data (supporting common data formats such as JSON, XML, CSV, Excel and the like) by analyzing the syntax structure, key fields and the like of the original data;
establishing metadata knowledge base of various data formats, including format specification, field definition, semantic annotation and other information;
dynamically expanding metadata information for a new data format;
Selecting a proper conversion algorithm (supporting bidirectional conversion among various formats to realize standardization of data) according to the identified original data format;
In this step, the selection of an appropriate conversion algorithm based on the identified original data format may be analyzed from the following aspects: identifying the original data format (before processing the data, its source and format, such as JSON, XML, CSV or binary, must first be identified, since different data formats have different structures and characteristics); selecting a proper conversion algorithm (a suitable algorithm is chosen according to the identified format, including a parsing algorithm that the system can understand and process, and a conversion algorithm that converts the data from one format to another, for example converting JSON data to XML format or converting CSV data to database records); bidirectional conversion support (many application scenarios require conversion between multiple data formats, so supporting bidirectional conversion is very important; for example, the system can convert JSON data to XML and also convert the XML data back to JSON); standardization of the data (by selecting the proper conversion algorithm, data can be uniformly converted to one standard format regardless of the source format, which facilitates subsequent processing and analysis and improves the consistency and usability of data from different sources, so that such data can be combined seamlessly); and compatibility and extensibility (the selected conversion algorithm should be well compatible with existing formats, and when a new format appears, a new conversion algorithm can easily be added without reworking the module). This process ensures flexible conversion of data between different formats and improves the efficiency and accuracy of data processing, thereby providing a reliable basis for subsequent data analysis and application.
The field semantics are kept and the data integrity is considered in the conversion process;
the module adopts a pluggable, modular internal design;
supporting dynamic expansion to new data formats without modifying the core code;
allowing flexible expansion and upgrading according to service requirements;
the high-efficiency data analysis and conversion algorithm is adopted, so that the conversion efficiency is ensured;
the throughput of data processing is improved by using the technologies of caching, parallel computing and the like;
For data with nonstandard format or conversion failure, a friendly exception handling mechanism is provided;
and recording abnormal information, so that the follow-up problem investigation and module optimization are facilitated.
Through this design, the adaptive data format conversion module can identify and automatically convert various heterogeneous data formats, provides a flexible and extensible architecture that adapts to changing service requirements, and ensures the efficiency and robustness of data conversion. It reduces the complexity of data integration and provides a standardized data basis for subsequent data analysis and application; it is a key component of the data management of the Internet of Things API docking platform and plays an important role in normalizing the original data.
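The detect-then-convert flow described above can be sketched as follows. This is a minimal, illustrative Python sketch, not the claimed module: the function names and the choice of a list of dictionaries as the unified target form are assumptions for illustration.

```python
import csv
import io
import json
import xml.etree.ElementTree as ET

def detect_format(raw: str) -> str:
    """Guess the format of a raw payload by probing its syntax."""
    text = raw.strip()
    try:
        json.loads(text)
        return "json"
    except ValueError:
        pass
    if text.startswith("<"):
        try:
            ET.fromstring(text)
            return "xml"
        except ET.ParseError:
            pass
    if text and "," in text.splitlines()[0]:
        return "csv"
    return "unknown"

def to_standard(raw: str) -> list:
    """Convert a detected payload into a list of dicts (the platform's unified form)."""
    fmt = detect_format(raw)
    if fmt == "json":
        data = json.loads(raw)
        return data if isinstance(data, list) else [data]
    if fmt == "xml":
        root = ET.fromstring(raw)
        return [{child.tag: child.text for child in rec} for rec in root]
    if fmt == "csv":
        return list(csv.DictReader(io.StringIO(raw.strip())))
    raise ValueError("unsupported format")
```

A quality-inspection step (completeness, range checks) would run on the unified records before they are stored, as the embodiment describes.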
In some possible embodiments of the present invention, the step of using a semantic analysis module based on machine learning technology to perform semantic analysis on the normalized first data, automatically generating a data meta-model, and constructing a knowledge graph includes:
inputting the normalized first data to a semantic analysis module;
Carrying out semantic analysis on the first data by utilizing natural language processing and machine learning technology, and identifying semantic elements such as concepts, entities and relations contained in the first data;
automatically constructing a data meta-model (the model comprises definitions of elements such as data objects, attributes, relations and the like and provides a meta-data basis for structured expression and subsequent application of data) based on the semantic analysis result of the previous step;
Converting the identified semantic elements into nodes and edges (the nodes represent concepts and entities, and the edges represent semantic relations between the concepts and the entities) in the knowledge graph, and constructing a structured knowledge base containing semantic information;
And the semantic analysis, the data element model and the knowledge graph are continuously learned and optimized by utilizing a machine learning technology (along with the increase of the processed data quantity, the analysis accuracy and the knowledge coverage range are improved).
According to this embodiment, the platform is raised from simple format conversion to deep semantic understanding of the data content: the concepts, entities and relations behind the data are understood, a data meta-model is automatically generated to provide standardized metadata guidance for data applications, and a knowledge graph is constructed that converts the data into a structured knowledge base rich in semantic information. On the basis of this semantic understanding and the knowledge graph, more intelligent data applications such as intelligent question answering, knowledge reasoning and decision support can be developed, the connotation of the data can be deeply understood, hidden patterns and rules can be discovered, and more value can be created for subsequent data analysis and application. In short, this step brings the conversion from data to knowledge to the Internet of Things API docking platform, greatly improves the understanding and application capability of the data, and lays a foundation for developing more intelligent applications.
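Converting identified semantic elements into nodes and edges, as described above, can be illustrated with a minimal sketch; the `KnowledgeGraph` class and the example triples are hypothetical and stand in for the output of the semantic-analysis step.

```python
class KnowledgeGraph:
    """Minimal graph: nodes are concepts/entities, edges carry semantic relations."""

    def __init__(self):
        self.nodes = set()
        self.edges = set()  # (head, relation, tail) triples

    def add_triple(self, head, relation, tail):
        self.nodes.update((head, tail))
        self.edges.add((head, relation, tail))

    def neighbors(self, node, relation=None):
        """Entities reachable from `node`, optionally filtered by relation."""
        return {t for h, r, t in self.edges
                if h == node and (relation is None or r == relation)}

# Triples as they might come out of semantic analysis (names are illustrative).
extracted = [
    ("sensor-42", "instance_of", "TemperatureSensor"),
    ("sensor-42", "located_in", "warehouse-A"),
    ("TemperatureSensor", "subclass_of", "IoTDevice"),
]

kg = KnowledgeGraph()
for h, r, t in extracted:
    kg.add_triple(h, r, t)
```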
In some possible embodiments of the present invention, the method further comprises:
Determining scene requirements and data characteristics, namely analyzing key entities, attributes and relations in an application scene of the Internet of things, and identifying types, formats and semantic characteristics of data sources;
Establishing a data meta-model, namely defining core concepts and entities in a scene, describing attributes, relations and constraints of the entities, and determining standards of data exchange, storage and query;
the method comprises the steps of establishing a domain knowledge graph, collecting and integrating knowledge resources in related domains, extracting entities and relations, and establishing the knowledge graph;
Integrating the data meta-model with the knowledge graph, namely mapping entities and attributes in the data meta-model to the knowledge graph, enhancing the semantic expression capacity of the data meta-model by utilizing the knowledge graph, and realizing the bidirectional association and synchronous updating of data and knowledge;
Developing application services based on semantics, namely supporting functions of semantic searching, reasoning, question-answering and the like by utilizing a knowledge graph, feeding back semantic analysis results to a data meta-model, improving the accuracy of data analysis, developing application services of intelligent decision, prediction and the like aiming at specific scenes;
Continuous optimization and expansion, namely continuously perfecting a data meta-model and a knowledge graph according to user feedback, exploring cross-domain knowledge fusion, improving universality and adaptability, and introducing technologies such as machine learning and the like to continuously enhance semantic analysis capability.
Through this embodiment, the data meta-model and the knowledge graph can be fully applied to specific Internet of Things scenes, the level of data semantic understanding and intelligent application is improved, and the personalized requirements of different industries are met.
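A data meta-model with entities, attributes, relations and constraints, as defined above, might be sketched as follows; the entity and attribute names are illustrative assumptions, and only a required-field constraint is shown.

```python
from dataclasses import dataclass, field

@dataclass
class AttributeDef:
    name: str
    dtype: str
    required: bool = False

@dataclass
class EntityDef:
    name: str
    attributes: list = field(default_factory=list)
    relations: dict = field(default_factory=dict)  # relation name -> target entity

def validate(record: dict, entity: EntityDef) -> list:
    """Check a record against the entity definition (here: required fields only)."""
    return [f"missing required attribute: {a.name}"
            for a in entity.attributes
            if a.required and a.name not in record]

# Illustrative scene: a temperature sensor located in a room.
sensor = EntityDef(
    name="Sensor",
    attributes=[AttributeDef("id", "string", required=True),
                AttributeDef("temperature", "float")],
    relations={"located_in": "Room"},
)
```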
In some possible embodiments of the present invention, implementing bidirectional synchronization update between a data meta-model and a knowledge graph may be performed by:
Establishing a mapping relation between the entity and the attribute, namely identifying the core entity and the attribute in the data meta-model, and establishing the mapping relation between the entity and the attribute and the corresponding node and the attribute in the knowledge graph;
Synchronizing the data meta-model to the knowledge graph, namely importing the entity, attribute and relation information in the data meta-model to the knowledge graph, and supplementing and enriching semantic relations among the entities by utilizing an inference mechanism of the knowledge graph;
Synchronizing the knowledge graph to the data meta-model, namely monitoring the changes of entities, attributes and relations in the knowledge graph, and feeding the changes back to the data meta-model to ensure that the data model always stays synchronized with the knowledge;
establishing an automatic synchronization mechanism, namely designing events and rules triggering synchronization, such as newly added entities, attribute changes and the like, developing an automatic tool for regular scanning and comparison of data, and ensuring the reliability and consistency of the synchronization process;
Handling conflicts in the synchronization process, namely identifying and processing conflicts between the data meta-model and the knowledge graph, such as data type mismatches;
providing visual synchronization monitoring, namely developing a dashboard that displays synchronization states and change histories, and supporting manual intervention and manual adjustment of the synchronization rules.
Through this bidirectional synchronization mechanism, the data meta-model and the knowledge graph can be kept highly consistent, supporting the semantic analysis and decision services of Internet of Things applications. At the same time, the accumulation and continuous optimization of knowledge are promoted, improving the intelligence level of the whole Internet of Things system.
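The bidirectional synchronization with conflict detection might be sketched as follows, under a deliberate simplification: both the meta-model and the graph are represented as nested dictionaries of entity → attribute → declared type, and a conflict is a type mismatch that is reported rather than overwritten.

```python
def sync_meta_model_to_graph(meta_model: dict, graph: dict) -> list:
    """Push meta-model entities/attributes into the graph; report type conflicts."""
    conflicts = []
    for entity, attrs in meta_model.items():
        node = graph.setdefault(entity, {})          # new entity -> new graph node
        for attr, typ in attrs.items():
            if attr in node and node[attr] != typ:
                conflicts.append(f"{entity}.{attr}: {node[attr]} != {typ}")
            else:
                node[attr] = typ
    return conflicts

def sync_graph_to_meta_model(graph: dict, meta_model: dict) -> list:
    """Feed graph-side changes back so the meta-model stays aligned."""
    return sync_meta_model_to_graph(graph, meta_model)
```

In a real system the reported conflicts would feed the dashboard and manual-intervention path described above.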
In some possible embodiments of the present invention, the step of storing and tracing the first data, the data meta-model and the knowledge graph by using a blockchain technology through a blockchain-based data storage module to ensure verifiability and non-tamper resistance of the first data includes:
selecting a proper blockchain platform (such as Ethereum, Hyperledger Fabric and the like), deploying blockchain network nodes, constructing a distributed ledger, and generating a first blockchain network;
In this step, when selecting a proper blockchain platform, several key factors may be compared: performance (such as transaction throughput, i.e. the number of transactions processed per second; transaction confirmation time, i.e. the time until transactions are finally written into the blockchain; and scalability, i.e. the supported number of concurrent transactions and data scale); security (such as the security of the consensus mechanism against 51% attacks, the security of the cryptographic algorithms against quantum computing attacks, and the security of smart contracts against contract vulnerabilities); programmability (such as the maturity and availability of the smart contract language, the completeness of development tools and SDKs, and community activity and the ecosystem); supervision and compliance (whether the platform meets industry regulatory policies and standards, supports the required types of digital assets, and provides privacy protection); and cost and deployment (the cost of node deployment and maintenance, transaction fees and resource consumption, and integration difficulty and development investment). According to these factors, the platform most suitable for the Internet of Things application of this embodiment is selected by comparing the mainstream blockchain platforms, such as Ethereum, Hyperledger Fabric, Corda and EOS.
Specifically: a blockchain platform with high throughput and low confirmation time is selected to support large-scale data storage and transaction requirements; a platform with higher security is selected to resist various attacks and guarantee the safety of data and transactions; a platform with more complete development tools and SDKs is selected to reduce the complexity of development and integration and improve development efficiency; a platform conforming to industry regulatory standards is selected to guarantee the compliance of the system and reduce regulatory risk; a platform with good scalability is selected to support the growth of future service scale and the addition of new functions; and a platform with lower deployment and operating cost is selected to reduce the total cost of ownership of the system. By comprehensively considering these factors, the blockchain platform most suitable for the Internet of Things API docking platform is selected, laying a solid foundation for the development of future services.
Uploading the first data, the data meta-model and the knowledge graph to the first blockchain network, and storing them in the distributed ledger in blocks through a smart contract;
Establishing a digital identity certificate for identity authentication and authority management for each participant of an API docking platform of the Internet of things;
setting access control strategies of data according to different roles of each participant so as to ensure the security of the data;
Recording operations of creating, modifying and accessing the first data, the data meta-model and the knowledge graph, and generating an audit log;
Constructing a data query interface for each participant to query and verify data content according to requirements (by utilizing a consensus mechanism of a blockchain, the accuracy and consistency of query results are ensured);
according to business requirements and technical development, the performance and the function of the blockchain network are optimized to support flexible integration of new data types and application scenes.
The scheme of this embodiment can ensure the authenticity and integrity of the data by using the distributed ledger and encryption technology of the blockchain, make the whole data operation process traceable, and reduce the risk of data tampering. Fine-grained authority management of the data is achieved through identity authentication and access control, protecting data privacy and meeting the requirements of related regulations and compliance. By virtue of the multi-party collaboration characteristics of the blockchain network, data sharing and collaboration among different participants are realized and data utilization efficiency is improved; processes such as data management and auditing are executed automatically, reducing manual intervention and lowering the operating cost of data management. Because the blockchain guarantees that the data cannot be tampered with, the trust of data users is strengthened and a reliable basis is provided for subsequent data-based analysis and application. In short, this step brings blockchain-empowered data management capability to the Internet of Things API docking platform, greatly improves the security, credibility and value of the data, and lays a foundation for the robustness and sustainable development of the whole platform system.
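The tamper-evident property that the distributed ledger provides can be illustrated with a minimal hash-chain sketch. This stands in for a real platform such as Hyperledger Fabric, not a replacement for it; `make_block` and `verify_chain` are illustrative names.

```python
import hashlib
import json

GENESIS = "0" * 64

def make_block(payload: dict, prev_hash: str) -> dict:
    """Append-only block: the payload digest is chained to the previous block."""
    body = json.dumps(payload, sort_keys=True)
    digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    return {"prev_hash": prev_hash, "payload": payload, "hash": digest}

def verify_chain(chain: list) -> bool:
    """Recompute every digest; any tampering with a payload breaks the chain."""
    prev = GENESIS
    for block in chain:
        body = json.dumps(block["payload"], sort_keys=True)
        if block["prev_hash"] != prev:
            return False
        if hashlib.sha256((prev + body).encode()).hexdigest() != block["hash"]:
            return False
        prev = block["hash"]
    return True

# Chain the three kinds of records named in the embodiment.
chain = []
prev = GENESIS
for record in [{"data": "first_data"}, {"data": "meta_model"}, {"data": "knowledge_graph"}]:
    block = make_block(record, prev)
    chain.append(block)
    prev = block["hash"]
```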
In some possible embodiments of the present invention, the step of analyzing the second data by using the data meta-model and the knowledge-graph to obtain third data includes:
performing semantic validation and mapping between the second data and the data meta-model;
analyzing the validated and mapped second data by utilizing the entities, attributes and relations in the knowledge graph, and deducing the third data by applying inference rules;
in this step, the third data includes, but is not limited to, data mining results, algorithm analysis output, predictive model discovery, user feedback, market research results, expert evaluation results, and the like.
And outputting the third data in a proper format according to the requirements of the application scene (providing forms of visualization, report forms and the like, intuitively presenting analysis results, and supporting real-time updating and incremental calculation of the third data).
In this step, according to the requirements of the application scenario, outputting the third data in a suitable format may be analyzed from the following aspects:
Requirements of application scenarios:
different application scenarios may have different presentation and content requirements for the data, for example, a security monitoring scenario may require real-time alarm information, while a business analysis scenario may be more concerned with data trends and statistical reporting.
Suitable format output:
the output format should be customized according to the user's needs, and common forms include:
Data visualization: presenting the data with charts, dashboards and the like so that a user can quickly understand and analyze it; for example, a line chart shows trends and a pie chart shows proportions.
Report generation: producing a structured document containing detailed analysis results and statistical data for decision makers to study in depth and reference.
Visually presenting the analysis result:
Through clear visualization and reporting, the user is helped to better understand the analysis result and identify trends and anomalies. This intuitiveness can improve the decision making efficiency of the user.
Updating in real time:
in order to maintain timeliness of the data, the output third data should support real-time updating, which means that the system can continuously receive new data and reflect the new data in the visual interface or report immediately.
And (3) incremental calculation:
Incremental computation may increase efficiency when processing large-scale data, which allows the system to process only newly added or changed data, rather than re-computing all data each time, thereby increasing response speed.
In summary, this process ensures that the results of the data analysis are delivered to the user in a suitable form at a suitable time, thereby enabling efficient decision support and traffic optimization.
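Incremental calculation as described above can be illustrated with a running aggregate that folds in each new value instead of rescanning all data; the running-mean formulation below is one standard choice, shown purely as a sketch.

```python
class IncrementalMean:
    """Update an aggregate from new records only, instead of recomputing everything."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def update(self, value: float) -> float:
        # Welford-style running mean: O(1) per new record.
        self.count += 1
        self.mean += (value - self.mean) / self.count
        return self.mean
```

A real-time dashboard would call `update` for each arriving record and redraw only the affected widget, which is what keeps the displayed third data current without full recomputation.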
According to this scheme, the data meta-model and the knowledge graph are combined to conduct deep analysis of the second data, so that implicit relations among the data can be discovered and more valuable third data generated. The analysis results are output and displayed in a suitable form according to application requirements, and real-time updating and incremental calculation of the results are supported to meet the dynamic changes of service requirements. The data meta-model and the knowledge graph have good extensibility and can adapt to service development, and the analysis models and algorithms can be migrated and reused across application scenarios. In short, this scheme can make full use of the data meta-model and the knowledge graph to deeply analyze the second data, mine third data with higher value, provide decision support and business insight for upper-layer applications, and meet the various analysis requirements of Internet of Things applications.
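Deducing third data by applying inference rules over the knowledge graph might look like the following transitive-closure sketch; the `located_in` relation and all entity names are assumptions for illustration, and the derived facts play the role of the third data.

```python
def infer_transitive(facts: set, relation: str) -> set:
    """Close one relation under transitivity, deriving implicit facts.

    facts: set of (head, relation, tail) triples.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = {(a, relation, d)
               for a, r1, b in derived if r1 == relation
               for c, r2, d in derived if r2 == relation and b == c}
        if not new <= derived:
            derived |= new
            changed = True
    return derived

# Explicit facts from the validated second data (illustrative).
facts = {
    ("sensor-42", "located_in", "room-3"),
    ("room-3", "located_in", "warehouse-A"),
    ("warehouse-A", "located_in", "site-1"),
}
closure = infer_transitive(facts, "located_in")
```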
In some possible embodiments of the present invention, the step of using a real-time data flow analysis technique to intelligently analyze and process the third data and dynamically optimize the API response speed in combination with a network traffic control algorithm by the real-time data flow analysis module includes:
receiving and processing the third data using a streaming computing engine (e.g., Apache Spark Streaming, Apache Flink, etc.);
designing a real-time analysis algorithm (comprising complex event processing, time sequence analysis, prediction model and the like) aiming at the service requirement to analyze the third data so as to obtain a real-time analysis result;
Monitoring access flow and response time of the API interface, applying a dynamic flow adjustment algorithm (such as current limiting, load balancing and the like) based on feedback control, adjusting algorithm parameters according to real-time analysis results, and dynamically optimizing the response speed of the API.
The method further comprises: pushing the real-time analysis results to upper-layer application systems in a suitable format; providing real-time monitoring and alarm functions so that abnormal conditions are found in time; supporting persistent storage and offline analysis of the results to meet historical data query requirements; adopting a distributed, micro-service architecture design to improve the scalability and elasticity of the system; using container and orchestration technology to realize automated deployment and operation; and applying performance optimization techniques such as memory management and I/O optimization to improve system performance.
With the scheme of this embodiment, real-time data streams can be intelligently analyzed and processed quickly and accurately, and insights valuable for business decision-making and optimization can be mined. The parameters of the network traffic control algorithm are adjusted dynamically according to the real-time analysis results, guaranteeing quick response of the API interface under highly concurrent access. The distributed, micro-service architecture design supports horizontal scaling and automatic expansion, improves the fault tolerance and availability of the system, and meets the requirements of different business scenarios. Automated deployment and operation based on container and orchestration technology improve system stability and reduce manual operation and maintenance cost, while techniques such as memory management and I/O optimization improve the overall performance of the system, make reasonable use of computing and storage resources, and raise resource utilization. In short, this scheme makes full use of real-time data stream analysis technology and a network traffic control algorithm to intelligently analyze and process the third data, dynamically optimizes the API response speed, provides fast and accurate decision support for upper-layer application systems, and meets the real-time and scalability requirements of Internet of Things applications.
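One common form of the dynamic flow adjustment (current limiting) mentioned above is a token bucket; the following is an illustrative sketch rather than the claimed algorithm, and the injectable `now` parameter exists only to make the sketch deterministic for testing.

```python
import time

class TokenBucket:
    """Token-bucket limiter: admit a request only if a token is available.

    `rate` tokens are refilled per second up to `capacity`; in a feedback-
    controlled setup, `rate` would be tuned from real-time analysis results.
    """

    def __init__(self, rate: float, capacity: float, now=None):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        elapsed = max(0.0, now - self.last)
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A rejected request could be queued or answered with a throttling response; combining the limiter with load balancing across API nodes is the complementary half of the adjustment described above.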
In some possible embodiments of the present invention, the step of extracting the first image data collected by the preset monitoring device from the second data includes:
Analyzing the received second data, and identifying and extracting an image data frame carrying a mark of a preset monitoring device as first image data;
performing format conversion and compression on the first image data according to the service requirements (to reduce storage and transmission load);
checking the integrity and validity (such as resolution, encoding format, etc.) of the extracted first image data;
performing basic enhancement and optimization processing (such as brightness/contrast adjustment, noise elimination, etc.) on the first image data;
persistently storing the extracted first image data to support historical data query and analysis;
designing a data management module that provides fast retrieval and access capabilities for the first image data;
According to the service requirement, data encryption and access control measures are implemented to ensure the integrity and confidentiality of the first image data, and the privacy information is protected.
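Extracting the first image data from the mixed second-data stream might be sketched as follows; the record fields (`type`, `device_id`, `payload`, `width`, `height`) are assumed for illustration, and the validity checks shown are the minimal ones named above.

```python
def extract_image_frames(records: list, device_ids: set) -> list:
    """Pull out frames tagged with a preset monitoring device and sanity-check them."""
    frames = []
    for rec in records:
        if rec.get("type") != "image_frame":
            continue                              # not image data
        if rec.get("device_id") not in device_ids:
            continue                              # not from a preset device
        if not rec.get("payload"):
            continue                              # integrity check: empty payload
        if rec.get("width", 0) <= 0 or rec.get("height", 0) <= 0:
            continue                              # validity check: bad resolution
        frames.append(rec)
    return frames

# Illustrative second-data stream mixing sensor readings and frames.
stream = [
    {"type": "temperature", "device_id": "cam-01", "value": 21.5},
    {"type": "image_frame", "device_id": "cam-01",
     "payload": b"\x89PNG...", "width": 1920, "height": 1080},
    {"type": "image_frame", "device_id": "cam-99",
     "payload": b"...", "width": 640, "height": 480},
    {"type": "image_frame", "device_id": "cam-01",
     "payload": b"", "width": 1920, "height": 1080},
]
first_image_data = extract_image_frames(stream, device_ids={"cam-01"})
```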
In this embodiment, the monitoring devices may be set up according to the following steps: determining the positions and number of monitoring devices to be deployed according to the specific service scenario and application requirements; evaluating the purpose and intended effect of the monitoring, such as security monitoring, equipment operation monitoring or environmental monitoring; selecting a suitable type of monitoring device according to the monitoring requirements, such as an IP camera, an infrared detector or a temperature and humidity sensor; selecting products that meet the requirements by considering factors such as functional characteristics, resolution, transmission protocol and power supply; determining the installation position of each device, considering factors such as coverage, lighting conditions and the risk of equipment damage; installing and connecting the devices according to their instructions so that they work normally; debugging and testing the devices so that the monitoring equipment can collect data normally; connecting the monitoring devices to the network to ensure that they can connect stably to the data center or cloud platform; setting appropriate network parameters, such as IP address, subnet mask and gateway, to ensure that each device can be accessed successfully; considering the network security requirements of the devices, such as authentication and encrypted transmission; configuring a data interface at the data receiving end (such as a data center or cloud platform) to support data reporting from the monitoring devices; selecting a suitable data transmission protocol, such as RTSP, ONVIF or MQTT, to ensure smooth data transmission; testing whether the data receiving end can correctly receive and parse the data from the monitoring devices; establishing a regular inspection and maintenance mechanism for the monitoring devices so that they can operate stably over the long term; attending to software upgrades and bug fixes of the devices to maintain the security of the device systems; and periodically backing up and archiving the monitoring data to ensure the integrity of the historical data. Through these steps, the preset monitoring devices can be reasonably deployed and configured, providing a stable and reliable data source for subsequent data acquisition and analysis. At the same time, factors such as device safety and privacy protection must be considered to ensure the legal compliance of the monitoring behavior.
The scheme of this embodiment can receive video stream data from the monitoring devices in real time and reliably, and supports various video protocols and encoding formats to meet the requirements of different devices and systems. Image data frames are extracted from the second data stream quickly and accurately, and the extracted image data undergoes the necessary format conversion and compression to reduce storage and transmission load. Data verification and basic processing ensure the integrity, clarity and usability of the extracted image data, improving its reliability and providing a high-quality data source for subsequent applications. Persistent storage and fast retrieval of the image data are provided to support historical data query and analysis and meet service requirements, and the necessary security measures are implemented to protect the confidentiality and integrity of the image data, safeguard user privacy and satisfy related regulatory requirements. In short, this scheme can reliably extract high-quality image data from the second data, provide a valuable data source for subsequent applications such as image analysis and object recognition, and at the same time offer good data management and security protection capabilities, meeting the data processing and privacy protection requirements of Internet of Things applications.
In some possible embodiments of the present invention, the steps of using the monitoring analysis module based on image recognition to perform intelligent analysis on the first image data based on image recognition and deep learning technology, identifying an abnormal situation and triggering early warning include:
Performing preprocessing operations of format conversion, scale adjustment and color correction on the first image data to obtain second image data;
acquiring historical monitoring image data, and constructing an image sample data set (including image samples of normal conditions and abnormal conditions) according to the historical monitoring image data;
Selecting a proper deep learning model architecture (such as Convolutional Neural Network (CNN), a target detection model and the like);
In this step, a suitable deep learning model architecture, in particular a convolutional neural network (CNN) or a target detection model, may be chosen in view of the following aspects: task requirements (different deep learning models suit different tasks; for example, a CNN is particularly suitable for image classification and feature extraction, whereas a target detection model such as YOLO or Faster R-CNN is dedicated to identifying multiple objects in an image together with their locations); data type (when image data is processed, a CNN can capture the spatial features of an image effectively thanks to its local connections and weight sharing, while for tasks that must identify several objects and determine their positions at the same time, a target detection model is the more suitable choice); model complexity (a CNN is relatively simple and fits basic image-processing tasks, while a target detection model is more complex and generally requires more computational resources and training time, but can provide richer information); performance requirements (in real-time monitoring scenarios a fast, efficient model such as the YOLO series is critical, whereas in scenarios requiring high accuracy a more precise but slower model such as Faster R-CNN may be preferable); computational resources (the available hardware constrains how large a model can be trained and deployed); and scalability (whether the model can be extended flexibly, whether transfer learning can conveniently be applied, and whether it can be used in combination with other models). By understanding these aspects, the deep learning model architecture suited to the specific application scenario and requirements can be better selected, thereby improving the overall performance and effect of the system.
Performing supervised learning on the model by using the image sample data set, and optimizing model parameters;
Through model evaluation and tuning, the recognition accuracy and generalization capability of the model are improved;
deploying the trained deep learning model into a monitoring analysis module;
defining judgment rules of abnormal conditions (such as object invasion, equipment fault, environment abnormality and the like);
carrying out intelligent analysis on the second image data by utilizing a deep learning model in combination with a judging rule, and identifying whether an abnormal condition exists;
When an abnormal condition exists, an early warning mechanism (such as sending alarm information, recording event logs and the like) is triggered;
and classifying and evaluating the severity of the abnormal situation according to the service requirement.
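The steps above — defining judgment rules, triggering the early warning mechanism, and classifying severity — can be sketched as follows. All rule names, class labels, the 0.8 confidence threshold, and the severity levels are hypothetical examples chosen for illustration; a real deployment would derive them from the deployed model's classes and the service requirements.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Detection:
    label: str          # class predicted by the deep learning model
    confidence: float   # model confidence in [0, 1]


# Judgment rules mapping a detection to an abnormal-condition type
# (object intrusion, equipment fault, environment anomaly).
RULES: dict[str, Callable[[Detection], bool]] = {
    "object_intrusion":    lambda d: d.label == "person" and d.confidence > 0.8,
    "equipment_fault":     lambda d: d.label == "smoke" and d.confidence > 0.8,
    "environment_anomaly": lambda d: d.label == "water" and d.confidence > 0.8,
}

# Severity classification per abnormal-condition type (illustrative).
SEVERITY = {"object_intrusion": "high",
            "equipment_fault": "high",
            "environment_anomaly": "medium"}


def analyze(detections: list[Detection]) -> list[dict]:
    """Apply the judgment rules and emit one early-warning record per match."""
    warnings = []
    for det in detections:
        for anomaly, rule in RULES.items():
            if rule(det):
                # Trigger early warning: here, record a structured warning;
                # a real system would also send alarms and write event logs.
                warnings.append({"type": anomaly,
                                 "severity": SEVERITY[anomaly],
                                 "confidence": det.confidence})
    return warnings
```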
The method and the device can further comprise the steps of feeding back related data (such as images and detection results) of the abnormal event for manual analysis and auditing, continuously optimizing and iterating the model according to the manual analysis results to improve accuracy and reliability, periodically evaluating the performance of the model, and adjusting and optimizing the model according to service requirements.
According to the scheme, the image data can be automatically analyzed and anomalies detected by means of the deep learning technology, which greatly improves the intelligence level of the monitoring system and reduces the workload of manual examination. Abnormal conditions in the image data, such as intrusion, faults and environmental anomalies, can be detected in real time, and the rapidly triggered early warning mechanism provides timely abnormal event notifications for management staff. Through manual feedback and model iteration, the accuracy and reliability of the monitoring analysis are continuously improved, changes in service requirements are adapted to, and the monitoring analysis module is continuously optimized and upgraded. The time and labor cost required by manual monitoring are reduced, the overall efficiency of the monitoring system is improved, more accurate and timely abnormal event information is provided for management staff, and decision support capability is improved. Abnormal conditions are discovered and warned about in time, the overall safety precaution capability is improved, and all-round safety guarantees are provided for the Internet of things system in combination with means such as video monitoring. In a word, the scheme can fully utilize the deep learning technology to realize automated intelligent monitoring analysis, greatly improving the safety and the operation and maintenance efficiency of Internet of things applications. Meanwhile, reliable basic support is provided for subsequent data analysis, event early warning and the like.
In some possible embodiments of the present invention, the step of generating the abnormal event description data according to the preset abnormal event model and storing the abnormal event description data according to the result of identifying the abnormal situation includes:
Determining abnormal event basic information (the abnormal event basic information includes, but is not limited to, the type of the abnormal event, the occurrence time, the occurrence place, a description of the event process, multimedia data related to the abnormal event, and the like);
Converting the identified basic information of the abnormal event into standardized abnormal event description data according to a predefined abnormal event model;
the abnormal event description data comprises an event ID, an event type, an occurrence time, an occurrence place, a description of the abnormal situation, and related multimedia contents such as images, videos and sensor data;
and storing the generated abnormal event description data into a preset database or file system.
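The conversion of the identified basic information into standardized abnormal event description data can be sketched as follows. The field names (`event_type`, `occurred_at`, `media_refs`, etc.) and the JSON serialization are illustrative assumptions; the embodiment only requires that the description data contain the event ID, type, time, place, situation description and multimedia references in a standardized form.

```python
import json
import uuid
from dataclasses import dataclass, field, asdict


@dataclass
class AbnormalEvent:
    """Standardized abnormal event description (field names illustrative)."""
    event_type: str
    occurred_at: str                                  # ISO-8601 timestamp
    location: str
    description: str
    media_refs: list = field(default_factory=list)    # image/video/sensor refs
    event_id: str = field(default_factory=lambda: uuid.uuid4().hex)


def to_description_data(event: AbnormalEvent) -> str:
    """Serialize the event to JSON for storage in a database or file system."""
    return json.dumps(asdict(event), ensure_ascii=False)


ev = AbnormalEvent("object_intrusion", "2024-01-01T12:00:00Z",
                   "gate-3", "unauthorized person detected",
                   media_refs=["img/0001.jpg"])
record = to_description_data(ev)
```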
The method may further comprise: realizing basic operations on the abnormal event data such as adding, deleting, modifying and querying, to meet daily management requirements; designing a suitable data storage structure and index mechanism according to business requirements, to improve data retrieval efficiency; implementing safety protection measures on the abnormal event data, such as encrypted storage and permission control, to guarantee data safety; setting appropriate data access permissions according to the requirements of different roles, to prevent unauthorized access; establishing a logging mechanism to track access to and operations on the data; using the abnormal event description data, in combination with other monitoring data, to carry out in-depth anomaly analysis and event association; formulating targeted response measures according to the analysis results, such as triggering early warning notifications, automatic processing and manual intervention; and continuously perfecting the abnormal event recognition and processing mechanisms through ongoing data analysis and feedback optimization.
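The storage structure, index mechanism and basic add/delete/query operations described above can be sketched with an in-memory SQLite database. The table and index names, columns, and the in-memory connection are assumptions for illustration; a deployment would use the preset database with encryption and permission control added on top.

```python
import sqlite3

# In-memory database stands in for the preset database of the embodiment.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE abnormal_event (
    event_id    TEXT PRIMARY KEY,
    event_type  TEXT NOT NULL,
    occurred_at TEXT NOT NULL,
    location    TEXT,
    description TEXT)""")
# Composite index on type and time to speed up the retrieval patterns
# described above (events of a given type, ordered by occurrence time).
conn.execute(
    "CREATE INDEX idx_type_time ON abnormal_event(event_type, occurred_at)")


def add_event(eid, etype, when, place, desc):
    conn.execute("INSERT INTO abnormal_event VALUES (?,?,?,?,?)",
                 (eid, etype, when, place, desc))


def query_by_type(etype):
    cur = conn.execute(
        "SELECT event_id FROM abnormal_event "
        "WHERE event_type=? ORDER BY occurred_at", (etype,))
    return [row[0] for row in cur]


def delete_event(eid):
    conn.execute("DELETE FROM abnormal_event WHERE event_id=?", (eid,))


add_event("e1", "object_intrusion", "2024-01-01T12:00:00Z", "gate-3", "intruder")
add_event("e2", "equipment_fault", "2024-01-01T13:00:00Z", "room-1", "smoke")
```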
According to the scheme, structured abnormal event description data can be generated, which facilitates subsequent storage, query and analysis and improves the readability and processability of the abnormal event information. The abnormal event data are persistently stored, a complete abnormal event history is constructed, and reliable data support is provided for subsequent data analysis, tracing and auditing. The safety and privacy of the abnormal event data are guaranteed, data leakage and illegal access are prevented, and reasonable data access permission management is achieved according to the requirements of different roles. By utilizing the abnormal event data in combination with other monitoring data, in-depth anomaly analysis and event association are carried out, more accurate and comprehensive abnormal event information is provided for management personnel, and rapid decision-making and response are supported. Through continuous optimization, the capability of recognizing and handling abnormal events is steadily improved; the persistent storage and analysis of abnormal events also help to discover system vulnerabilities, optimize monitoring strategies, and provide more valuable data support for the operation and maintenance management of the monitoring system. In a word, the scheme can convert the result of abnormal event identification into a standardized data description and perform persistent storage and safety management, thereby providing a reliable basis for subsequent data analysis and application, while also improving the overall reliability and the operation and maintenance efficiency of the monitoring system.
Referring to fig. 2, another embodiment of the present invention provides a data management system for an API docking platform of the internet of things, including an internet of things device, a monitoring device, and a management server provided with an adaptive data format conversion module, a semantic analysis module, a data storage module, a real-time data stream analysis module, and a monitoring analysis module;
The management server is configured to:
Acquiring first original data from Internet of things equipment, and performing unified standardization processing on the first original data through a self-adaptive data format conversion module to obtain first data;
carrying out semantic analysis on the normalized first data by utilizing a semantic analysis module based on a machine learning technology, automatically generating a data element model and constructing a knowledge graph;
The first data, the data meta-model and the knowledge graph are stored and traced by using a blockchain technology through a blockchain-based data storage module, so that verifiability and non-tamper resistance of the first data are ensured;
acquiring first real-time data from Internet of things equipment, and performing unified standardization processing on the first real-time data through a self-adaptive data format conversion module to obtain second data;
analyzing the second data by using the data meta-model and the knowledge graph to obtain third data;
The real-time data flow analysis module adopts a real-time data flow analysis technology to carry out intelligent analysis and processing on the third data, and dynamically optimizes the API response speed by combining a network flow regulation algorithm;
Extracting first image data acquired by preset monitoring equipment from the second data;
Using a monitoring analysis module based on image recognition, performing intelligent analysis on the first image data based on image recognition and deep learning technology, recognizing abnormal conditions and triggering early warning;
Generating abnormal event description data according to a preset abnormal event model according to the result of identifying the abnormal condition, and storing the abnormal event description data.
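The order of operations performed by the management server can be sketched as a minimal pipeline. Every function below is a stub standing in for the corresponding module of the system (format conversion, semantic analysis, blockchain storage, real-time stream analysis, image-based monitoring analysis) and carries none of the real logic; the sketch only fixes the processing order.

```python
def run_pipeline(raw_record: dict) -> dict:
    """Minimal sketch of the management server's processing order."""
    trace = []

    def normalize(rec):          # adaptive data format conversion module
        trace.append("normalize"); return rec

    def semantic_analysis(rec):  # semantic analysis module (meta-model, graph)
        trace.append("semantic"); return rec

    def store_on_chain(rec):     # blockchain-based data storage module
        trace.append("store"); return rec

    def stream_analysis(rec):    # real-time data stream analysis module
        trace.append("stream"); return rec

    def monitor_images(rec):     # image-recognition monitoring analysis module
        trace.append("monitor"); return rec

    rec = normalize(raw_record)
    rec = semantic_analysis(rec)
    rec = store_on_chain(rec)
    rec = stream_analysis(rec)
    rec = monitor_images(rec)
    return {"record": rec, "trace": trace}
```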
It should be noted that the block diagram of the data management system for the API docking platform of the internet of things shown in fig. 2 is only illustrative, and the number of the illustrated modules does not limit the scope of the present invention. The data management system for the API docking platform of the internet of things provided in this embodiment may be used to execute each embodiment scheme of the corresponding data management method for the API docking platform of the internet of things, and in the specific implementation process, please refer to the description of each method embodiment, which is not repeated herein.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; for instance, the above-described division of units is merely a division of logical functions, and there may be other manners of division in actual implementation, such as multiple units or components being combined or integrated into another system, or some features being omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed between components may be an indirect coupling or communication connection via some interfaces, devices or units, and may be in electrical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform all or part of the steps of the methods of the various embodiments of the present application. The memory includes a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program instructing related hardware, where the program may be stored in a computer-readable memory, and the memory may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk or the like.
The foregoing describes the embodiments of the present application in detail. The principles and embodiments of the application are explained herein using specific examples, which are provided solely to facilitate the understanding of the method and core concepts of the application. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope in accordance with the ideas of the present application; in view of the above, the contents of this description should not be construed as limiting the present application.
Although the present invention is disclosed above, the present invention is not limited thereto. Variations and modifications, including combinations of the different functions and implementation steps, as well as software and hardware embodiments, may be made by those skilled in the art without departing from the spirit and scope of the invention.