Detailed Description
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", etc. to describe various elements is not intended to limit the positional relationship, the timing relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various described examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented in accordance with embodiments of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In embodiments of the present disclosure, the server 120 may run one or more services or software applications that enable the method of content processing.
In some embodiments, the server 120 may also provide other services or software applications that may include non-virtual environments and virtual environments. In certain embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof, which may be executed by one or more processors. A user operating a client device 101, 102, 103, 104, 105, and/or 106 may, in turn, utilize one or more client applications to interact with the server 120 to take advantage of the services provided by these components. It should be understood that a variety of different system configurations are possible, which may differ from system 100. Accordingly, fig. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
A user may process content using client devices 101, 102, 103, 104, 105, and/or 106. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that any number of client devices may be supported by the present disclosure.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computing systems, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and so forth. These computing devices may run various types and versions of software applications and operating systems, such as Microsoft Windows, Apple iOS, UNIX-like operating systems, Linux, or Linux-like operating systems (e.g., Google Chrome OS), or may include various mobile operating systems, such as Microsoft Windows Mobile OS, iOS, Windows Phone, and Android. Portable handheld devices may include cellular telephones, smart phones, tablets, personal digital assistants (PDAs), and the like. Wearable devices may include head mounted displays and other devices. Gaming systems may include a variety of handheld gaming devices, Internet-enabled gaming devices, and the like. The client devices are capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), and Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a variety of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. By way of example only, one or more networks 110 may be a Local Area Network (LAN), an Ethernet-based network, a token ring network, a Wide Area Network (WAN), the Internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., Bluetooth, Wi-Fi), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, midrange servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture involving virtualization (e.g., one or more flexible pools of logical storage that may be virtualized to maintain virtual storage for the server). In various embodiments, the server 120 may run one or more services or software applications that provide the functionality described below.
The computing system in server 120 may run one or more operating systems including any of the operating systems described above, as well as any commercially available server operating systems. The server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, and the like.
In some implementations, the server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of the client devices 101, 102, 103, 104, 105, and 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and 106.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 130 may be used to store information such as audio files and video files. The databases 130 may reside in various locations. For example, a database used by the server 120 may be local to the server 120, or may be remote from the server 120 and communicate with the server 120 via a network-based or dedicated connection. The databases 130 may be of different types. In certain embodiments, a database used by the server 120 may be, for example, a relational database. One or more of these databases may store, update, and retrieve data in response to commands.
In some embodiments, one or more of the databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key-value stores, object stores, or regular stores supported by a file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
The flow of an example content processing method 200 according to an embodiment of the present disclosure is described below in conjunction with fig. 2.
At step S201, the access layer acquires a first request including content data and a processing command for the content data.
At step S202, the access layer sends a second request to the service layer in response to the source of the first request satisfying a security index. The second request includes the content data and the processing command, and the second request does not include the source of the first request.
At step S203, the service layer processes the content data according to the processing command.
According to the method, the access layer can receive data that has undergone preliminary processing, perform a security check on it, and then pass it to the service layer for computation. The access layer does not need to pass the data source to the service layer, so the service layer does not need to know the source of the data. Functional decoupling between modules can thus be achieved, and the workload required for update iterations is reduced. The security index may be a determination, based on the security requirements of the service, of whether the source of the first request (e.g., a predecessor module such as a front end or presentation layer in the case of an internal request, or a gateway in the case of an external request) has access to the service layer. By splitting out the security check function, only a small number of required interfaces need to undergo a security check, reducing the resources required for computation.
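The access layer step described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation; the names `FirstRequest`, `ALLOWED_SOURCES`, and `build_second_request` are assumptions, and the set of allowed sources stands in for the security index.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FirstRequest:
    source: str              # e.g. "presentation_layer" or "gateway" (illustrative)
    content_data: bytes
    processing_command: str

# Assumed stand-in for the security index: the sources permitted to reach
# the service layer.
ALLOWED_SOURCES = {"presentation_layer", "gateway"}

def build_second_request(first: FirstRequest) -> Optional[dict]:
    """Check the request source against the security index; if it passes,
    build a second request that deliberately omits the source."""
    if first.source not in ALLOWED_SOURCES:
        return None  # source fails the security check; no second request
    # The second request carries only the content data and the command.
    return {"content_data": first.content_data,
            "processing_command": first.processing_command}
```

Note that the source field is dropped before forwarding, which is what lets the service layer stay ignorant of where the data came from.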
The above method may be applied to content processing scenarios. The content data may include text content, picture content, audio content, and video content, and may include topics, news, themes, events, and the like. The processing of the content data may include editing, cropping, version replacement, retouching, augmenting, etc. of various types of content, and the present disclosure is not limited thereto: the content data may include any data capable of carrying content and capable of being processed, and the processing command may refer to any desired manner of processing content data presented in a certain format. The content data may be processed to obtain the required data.
For example, content processing may include processing event content to surface hot events, identifying the timeliness and scarcity of events and the richness of related content using full-network crawling capabilities. Content processing may include extracting keywords from the text and the title, generating a title from keywords, and generating or searching for corresponding copyrighted images, dynamic topics, encyclopedia entries, background materials, and the like based on the text, title, keywords, and other textual content. Content processing may include detecting whether a picture contains undesirable features such as low definition, solid color, advertising, or mosaic; detecting whether text contains wrongly written characters and whether a paragraph is properly formatted, and optionally modifying it; and detecting whether a video is blurred, and optionally processing it, or searching for or generating similar videos.
Further, the above method may be combined with artificial intelligence and applied in an intelligent writing assistant or assisted writing product to process content data automatically or according to user preferences, without requiring the user to manually select a processing command. For example, content processing functionality may span the different stages of an author's creation process and, with artificial intelligence computing capabilities, help authors complete better articles. The following functions may be implemented with intelligent content processing. Before creation, displaying hot events and processing event content can guide the author's choice of topics, steer the author toward non-homogeneous articles, and improve article readership. During creation, rich writing material can be provided: related content generated from the text, title, keywords, and other content entered by the author is automatically displayed in the corresponding area of the editor, reducing how often the author must leave the page to search for materials and keeping the author's creation efficient. After creation, the method can assess the quality of the article and remind the author to revise and polish it in the editing stage, improving the final published quality of the article.
According to some embodiments, the content data may represent a picture. The corresponding processing command may be at least one of: obtaining keywords related to the picture, cropping the picture, rotating the picture, performing color processing on the picture, obtaining a picture related to the picture, obtaining a higher-definition version of the picture, and obtaining a copyrighted picture similar to the picture. These are typical processing commands for obtaining more desirable content data from picture content data, and can significantly improve the user experience. The processing commands are not limited to those listed above; other processing commands that satisfy the user's needs or an artificial intelligence computing index are possible. For example, the processing command may include replacing a person in the picture, adjusting the atmosphere of the picture, and the like.
According to some embodiments, the content data may represent text. The corresponding processing command may be at least one of: performing error correction on the text, obtaining keywords related to the text content, and obtaining topics related to the text content. Text processing is an important component of content processing. These are typical processing commands for obtaining more desirable content data from text content data, and can significantly improve the user experience. The processing commands are not limited to those listed above; other processing commands that satisfy the user's needs or an artificial intelligence computing index are possible. For example, processing commands may include extending, copying, or rewriting the text, and the like.
According to some embodiments, the content data may represent a video. The corresponding processing command may be at least one of: obtaining keywords related to the video, cropping the video, rotating the video, performing color processing on the video, obtaining a video related to the video, and obtaining a higher-definition version of the video. Videos tend to carry a large amount of content and require substantial processing resources. These processing commands target video content data and can significantly improve the user experience.
According to some embodiments, the content data may represent an event. The corresponding processing command may be at least one of: obtaining keywords related to the event, obtaining the scarcity of the event, obtaining the topicality of the event, obtaining pictures related to the event, obtaining videos related to the event, and obtaining articles related to the event. As a typical kind of content data, an event can be processed to intelligently extract its key points, popularity, value, and the like, which benefits the user experience and is particularly useful when the user needs to create content on the basis of events. For example, the keywords may represent the core features of an event; the obtained scarcity may indicate whether the event already has related articles on the network and the click-through rate of those articles; the topicality may indicate whether related creation would bring higher value; and related pictures or videos help reduce the user's manual searching when creating content based on the event, saving resources and improving efficiency.
If the code for the content processing functions is coupled in the same module, in particular if traffic source filtering, service logic and verification, database connection, network calling, and other functions are all completed in the same module, then front-end requests reach that module directly and the functions cannot be well separated. Coupling the code in the same module also slows development and iteration: small changes can affect the functions of other modules, so the regression cost of each online test is high. In addition, if a simple retry/polling strategy is adopted for a long, time-consuming interface, the strategy itself is time-consuming, back-end machine resources are wasted, and the time consumed by the whole process must be reduced to ensure the user experience. Based on this, a service architecture design is provided that combines product functions and service dependencies, improves code development efficiency, optimizes the user's latency experience, and provides appropriate security checks.
Fig. 3 is an example architecture of a content processing system 300 according to an embodiment of the present disclosure.
Content processing system 300 may include an access layer 301 and a service layer 302:
the access layer 301 is configured to obtain the first request. The first request includes content data and a processing command for the content data. The access layer 301 is further configured to send a second request to the service layer 302 in response to the source of the first request satisfying the security index. The second request includes the content data and the processing command, and the second request does not include the source of the first request.
The service layer 302 is configured to process the content data according to the processing command.
Referring again to fig. 3, the access layer 301 may obtain the first request from the presentation layer 311 or from the gateway 321. A request from the presentation layer may represent a request originating locally, from the internal network, or from the same product line. A request from the gateway 321 may represent a request from an external source, such as an external network or another product. In some embodiments, content processing system 300 may include the presentation layer 311. The presentation layer is configured to parse the front-end request to obtain the content data and the processing command, and to send the first request to the access layer. The access layer can thus receive a preprocessed request command, further enabling decoupling and a lightweight implementation of each module. In other examples, there may be no presentation layer, or the functionality of the presentation layer described herein may be incorporated into a front end or the like; any other method by which the access layer obtains the content data and processing command is applicable here.
For long, time-consuming feature calculations, the connection between the access layer and the service layer may already have been closed due to a timeout. In this case, a producer-consumer model may be adopted: the service layer publishes the calculation result for the corresponding picture to an asynchronous message queue, the message queue notifies the presentation layer, and the presentation layer, upon receiving the message, puts the picture feature calculation result into its cache. When the front end next retries or polls, the presentation layer can check the results in the cache and respond directly. No additional request to the service layer is required, reducing network latency and overhead.
In some embodiments, the service layer 302 is further configured to, after processing the content data according to the processing command to generate result data, create a message queue in response to the second request having timed out. The service layer 302 is also configured to send the result data to the presentation layer through the message queue. Thus, when the amount of computation is large or the network is busy, for example, if the request times out, a message queue is established and the result data is sent directly to the presentation layer. Since the service layer does not need to know the source of the data, it only needs to follow simple logic: publish to the message queue on timeout, and return directly otherwise. The message queue is established by the service layer after the connection is closed. Because traffic from external sources has a per-request upper limit on network time, it has no timeout problem and therefore involves no message queue. Polling of long, time-consuming interfaces is thus no longer needed, saving computing resources and improving computational stability.
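The timeout branch described above can be sketched with an in-memory queue standing in for the asynchronous message queue. All names here are illustrative assumptions, and a real system would use a networked message broker rather than `queue.Queue`.

```python
import queue

def respond_or_publish(result, connection_open, mq):
    """Producer side (service layer): reply directly while the connection is
    still open; otherwise publish the result to the message queue."""
    if connection_open:
        return result          # normal path: answer the second request
    mq.put(result)             # timeout path: publish for the presentation layer
    return None

def drain_into_cache(mq, cache, key):
    """Consumer side (presentation layer): move queued results into the cache
    so the next front-end retry can be answered without calling the service
    layer again."""
    while not mq.empty():
        cache[key] = mq.get()
```

The point of the split is that the producer never inspects the request source; it only branches on whether the connection is still open.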
In some embodiments, the method described in conjunction with fig. 2 may be carried out on the access layer 301 side of fig. 3.
Some alternative embodiments of the content processing method described in fig. 2 will be described below.
In accordance with some embodiments, the first request further includes a service interface identification, and the access layer sending the second request to the service layer includes the access layer sending the second request to a service interface in the service layer. The service interface identification can indicate the service interface in the service layer corresponding to the content data and the processing command, such as an interface for calling a picture processing module or an interface for calling a video processing module. The service interface is obtained from the first request, and the request parameters are forwarded directly to the corresponding interface, so no further computation on the request is needed; the request only has to be forwarded according to the routing rules. Thus, a proper division of functions is achieved, the amount of computation is reduced, and decoupling between modules can be further achieved.
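The forwarding-by-identification step above amounts to a table lookup. The following sketch assumes an illustrative routing table and endpoint names; none of these identifiers come from the disclosure.

```python
# Illustrative routing rules keyed by service interface identification.
ROUTING_RULES = {
    "picture": "http://service-layer/picture",
    "video": "http://service-layer/video",
    "text": "http://service-layer/text",
}

def route_second_request(interface_id, second_request):
    """Forward the request parameters unchanged to the interface named by
    the service interface identification; no further computation is done."""
    try:
        endpoint = ROUTING_RULES[interface_id]
    except KeyError:
        raise ValueError(f"no routing rule for interface {interface_id!r}")
    return endpoint, second_request  # parameters are forwarded as-is
```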
The extraction of the service interface identification described above may be implemented by means of a presentation layer. According to the architecture of some embodiments of the present disclosure, the presentation layer is the first module to interact directly with the front-end. That is, in addition to processing the front-end request to parse the data and processing commands of the obtained request, the presentation layer may perform preliminary processing on the request parameters from the front-end to select the service layer capabilities to be invoked, so that the access layer no longer needs to process the request parameters, but is only responsible for passing the data incoming from the presentation layer to the corresponding service layer based on the capabilities that the presentation layer has already selected.
According to some embodiments, the content data satisfies at least one of the following: the content data is in a format suitable for the processing command, the content data is from a user having processing rights, or the content data is from a logged-in user. This means that the content data has undergone preliminary verification before being processed by the method 200. The decoupling of the modules and the lightweight implementation of each functional module can thus be further achieved.
The above process may be implemented by the presentation layer 311 as described in fig. 3. For example, the presentation layer may preliminarily check the content data: determine that the content data is in a format suitable for the processing command (e.g., for a picture cropping command, that the content data is in .JPG or .BMP format), determine that the content data is from a logged-in user (e.g., determine the user's login status from the front-end request), or determine that the content data is from a user with processing authority. The presentation layer can also complete validity checks on request parameters, front-end response field formatting, data log instrumentation ("dotting"), and the like. Data log dotting refers to counting the clicks of a certain function in a certain interface, or statistics on a certain state; a data log file may be added at a designated location of the presentation layer. For checks strongly coupled with the service, such as user login status and user rights, the corresponding user's data can be acquired at the presentation layer and the check completed there. Filtering of whether a request parameter is empty, whether it is the required array type, and the like can also be finished at the presentation layer. Of course, the present disclosure is not limited to scenarios in which data is received from the presentation layer 311, nor does it require that a separate presentation layer module or similar functionality necessarily be included in the system 300. For example, the above processing may instead be realized by a front end, and the like. Alternatively, in a scenario where the first request is received from the gateway 321, for example, the interface may be defined in advance, so that a request from an external source must follow certain rules, or data from the external source must be processed in advance, before reaching the access layer.
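A minimal sketch of the preliminary checks above follows. The command name, the format mapping, and the check ordering are assumptions for illustration; a real presentation layer would also cover parameter validity, user rights, and so on.

```python
# Assumed mapping from processing command to suitable file formats.
SUITABLE_FORMATS = {"crop_picture": {".jpg", ".bmp"}}

def preliminary_check(filename, command, logged_in):
    """Pass only if the file format suits the command and the user is
    logged in, mirroring the presentation-layer checks described above."""
    suffix = ("." + filename.rsplit(".", 1)[-1].lower()) if "." in filename else ""
    return suffix in SUITABLE_FORMATS.get(command, set()) and logged_in
```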
According to some embodiments, the content data is an intranet stored version of the content or a link to a location in the intranet where the content to be processed is stored. Such a local storage version can improve computational efficiency. For example, such processing may also be implemented by the presentation layer 311 as described in fig. 3, but as described above, the present disclosure is not limited thereto.
In addition, when interacting with front-end requests, there is some personalized field processing that facilitates front-end display and separation, for example distinguishing the resource types of templates for the front end, or concatenating copy text. Since these processes do not involve field output for core functionality, they can be completed by the presentation layer, or by the presentation layer in conjunction with the front end.
According to some embodiments, before sending the second request to the service layer, the access layer checks whether the corresponding port of the service layer is available. Sending the second request includes sending the second request to the service layer in response to the status of the corresponding port of the service layer being available. The availability status is checked before invoking the services of the service layer. Calls to timed-out or faulty service layer machines can thereby be avoided, improving computing efficiency and stability.
The above-described process may be implemented by, for example, the access layer 301 of fig. 3. The access layer may check the availability status of the service layer's machines. For example, if more than a certain number of connection failures occur, the machine's service is deemed unavailable and the next request's traffic is distributed to other machines.
The access layer is also capable of distributing traffic among the different ports and machines of the service layer. For example, the access layer may distribute traffic among different machines or ports according to a random policy to avoid excessive traffic on any one machine. According to some embodiments, sending the second request comprises sending the second request to the service layer based on a flow control indicator. The flow control indicator may be based on the number of requests or computations per machine or per port, or on a predetermined allocation policy. The access layer can thus implement flow control and load balancing, which on one hand decouples the modules and on the other hand improves the operational stability of the system, avoiding traffic overload and the failures it can cause.
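The failure counting and random distribution described in the last two paragraphs can be sketched together. The threshold value and class name are assumptions; real access layers would also age out failure counts and perform health-check probes.

```python
import random

# Assumed: connection failures tolerated before a machine is skipped.
FAILURE_THRESHOLD = 3

class AccessLayerBalancer:
    """Sketch: track per-machine connection failures and pick a random
    machine from those still considered available."""

    def __init__(self, machines):
        self.failures = {m: 0 for m in machines}

    def record_failure(self, machine):
        self.failures[machine] += 1

    def pick(self):
        available = [m for m, n in self.failures.items()
                     if n < FAILURE_THRESHOLD]
        if not available:
            raise RuntimeError("no available service-layer machine")
        return random.choice(available)  # random policy spreads the traffic
```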
The access layer sits above the service layer and can perform load balancing and route forwarding for all request traffic. In addition to the presentation layer, the access layer can be connected to a gateway. The gateway is used to receive calls from external sources; when this functionality is invoked from an external source, such as another product line or team, that portion of the traffic enters through the gateway. Permission checks on external traffic sources, source distinction, rate limiting, and other processing can be completed at the gateway. To prevent external traffic from introducing network security issues, the gateway needs to control access rights and avoid services being overwhelmed by unexpected traffic, so it must check and limit the sources. The access layer is capable of receiving requests from both the presentation layer and the gateway. After receiving a request, the access layer may route it to the same service layer interface according to the same routing and forwarding rules.
According to some embodiments, the method further comprises, after processing the content data according to the processing command to generate result data, in response to the second request not having timed out, sending, by the service layer, the result data to the access layer as the response to the second request; and returning, by the access layer, the result data to the source of the first request. The result data can thus be received directly from the service layer, and the service layer still does not need to know the request source; it only needs to return the result, sparing the service layer that burden. Based on the data source information stored in the access layer, for example whether the request originally came from the presentation layer or the gateway, the generated result data is passed back, realizing independent modules and a convenient data transmission mechanism.
According to some embodiments, the first request is received from the presentation layer, and the method further comprises, after processing the content data according to the processing command to generate the result data, creating a message queue between the service layer and the presentation layer in response to the second request having timed out. The result data is then pushed by the service layer to the message queue. This design follows the idea of setting up a message queue only when the request times out. Because a request from an external source has a per-request upper limit, it cannot time out; therefore, when a timeout occurs the source is necessarily the presentation layer, the source need not be determined, and the message queue is established directly between the service layer and the presentation layer. Polling of long, time-consuming interfaces is thus no longer required, saving each module's computing resources and increasing computational efficiency.
The service layer is the core of the content computation functionality. The service layer can connect to a relational database, a full-text retrieval engine, a cache, and the like, or call other third-party services over the network to obtain more data. According to some embodiments, the service layer generates the result data by invoking at least one of: a relational database, a full-text search engine, a cache, or a third-party service. The desired content can thus be processed accurately. The service layer may provide different responses for different content types. For picture content, it can compute features such as the picture's definition and information entropy and label each picture's result against a threshold; it can crop to the expected picture size, compress and store the picture, and generate a new link; and it can retrieve copyrighted images, query the database for similar pictures according to the picture's labels, and provide copyrighted pictures to avoid creation risks. For text content, it can recognize wrongly written characters in long and short texts and prompt corrections, extract keywords of the text as topics or titles, and search the database for related content using those keywords. For video content, it can compute video features, check whether the video is blurry, and generate a higher-definition video link. For event content, it can search the whole network's articles for the corresponding event's scarcity, and acquire the event's related articles, updates, videos, and the like. It will be appreciated that the functionality of the service layer is not so limited and may respond to any content data and corresponding processing command.
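The per-content-type responses above suggest a simple dispatch structure inside the service layer. The handler bodies below are placeholders, not the disclosed algorithms, and all names are illustrative.

```python
def handle_picture(data):
    # Placeholder for picture processing (e.g. definition/entropy scoring,
    # cropping, copyright-image retrieval).
    return {"type": "picture", "input": data}

def handle_text(data):
    # Placeholder for text processing (e.g. typo detection, keyword
    # extraction, related-content search).
    return {"type": "text", "input": data}

HANDLERS = {"picture": handle_picture, "text": handle_text}

def serve(content_type, data):
    """Dispatch a processing request to the handler for its content type."""
    handler = HANDLERS.get(content_type)
    if handler is None:
        raise ValueError(f"unsupported content type: {content_type!r}")
    return handler(data)
```

Video and event handlers would register in `HANDLERS` the same way.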
According to some embodiments, the service layer is capable of batch processing a plurality of content data and a plurality of processing commands. The concurrent processing capacity of the service layer enables mass processing of data and improves computational efficiency. The concurrent processing manner of the service layer will be described in more detail below with reference to fig. 4(a). For example, the concurrency logic in the service layer may be described as first processing features in parallel, and then processing content in parallel. Computational efficiency can be further increased by such two-stage parallel processing. According to some embodiments, the batch processing comprises: establishing a plurality of coroutines for a plurality of processing features respectively corresponding to the plurality of processing commands, each coroutine of the plurality of coroutines corresponding to a respective processing feature of the plurality of processing features; processing the plurality of coroutines in parallel; and, within each of the plurality of coroutines, processing the plurality of content data in parallel for the corresponding processing feature.
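The two-stage structure, one coroutine per feature and, inside each, concurrent handling of every content item, can be sketched with Python coroutines. The feature names, content names, and the `compute` function are illustrative, not from the disclosure:

```python
import asyncio

# Sketch of two-stage batch concurrency: features fan out first, then
# each feature's coroutine fans out over all content items.
async def compute(feature, content):
    await asyncio.sleep(0)          # stand-in for the real per-item computation
    return (feature, content, "ok")

async def process_feature(feature, contents):
    # Second stage: within one feature's coroutine, process contents in parallel.
    return await asyncio.gather(*(compute(feature, c) for c in contents))

async def batch_process(features, contents):
    # First stage: one coroutine per feature, all running concurrently.
    per_feature = await asyncio.gather(
        *(process_feature(f, contents) for f in features))
    return dict(zip(features, per_feature))

results = asyncio.run(batch_process(["crop", "sharpness"], ["pic1", "pic2"]))
```

With n features and m content items this launches n coroutines, each of which launches m more, so all n×m computations can be in flight at once, which is the efficiency gain the two-stage design aims for.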
Here, a processing feature refers to an attribute related to the processing. For example, for picture processing, the feature may be image definition, the presence or absence of a two-dimensional code, picture size, or the like. For video, it may be definition, length, and so on. For text, it may be content attributes, wrongly written characters, fluency, related topics, and the like. Computing the features prior to processing the content enables batch processing to be carried out more efficiently in two concurrent stages.
The following describes the concurrent processing manner of the service layer with reference to fig. 4(a), taking picture computation as an example. It is to be understood that the concurrent processing manner of the service layer is not limited to the computation of picture content; other types and formats of content processing and computation can be handled according to similar ideas and logic.
Interfaces in the service layer support multi-feature, multi-content batch processing. In the batch process, a set of features, namely feature 1, feature 2, …, feature n, and a set of contents, namely content 1, content 2, …, content m, are specified in the input parameters. For example, features 1 to n may correspond to cropping, definition processing, acquisition of copyrighted similar pictures, extraction of keywords, and the like, respectively. For example, content 1 to content m may be m different pictures that each need to undergo the processing of feature 1 to feature n described above. Of course, the content is not limited to pictures; fig. 4(b) shows an example functional schematic of a service layer according to an embodiment of the present disclosure.
When a plurality of features are processed concurrently, the features are activated in sequence, and the sequence of activations constitutes a loop, referred to herein as the "first loop C1". Once the loop completes and concurrency begins, the plurality of features may be processed simultaneously by a plurality of processing units. For example, the plurality of processing units simultaneously and respectively handle features such as cropping, definition processing, acquisition of copyrighted similar pictures, and extraction of keywords. Specifically, in the first loop C1, a coroutine is opened for each feature, so each feature may be considered to begin concurrently at the same point in time. Next, a plurality of contents are processed concurrently, and the points in time at which the contents are started constitute another loop, referred to herein as the "second loop C2". When the second loop ends, the plurality of contents are processed concurrently for each feature by the corresponding processing unit among the plurality of processing units. Similarly, in the second loop, within each feature, a coroutine is opened for each content in the batch for computation. Thus, the contents within each feature may also be considered concurrent. For example, a cropping operation is performed in parallel on pictures 1 to m by a first processing unit, an operation of acquiring copyrighted similar pictures is performed in parallel on pictures 1 to m by a second processing unit, and so on, and these processing units may also run concurrently.
If the computation fails for some content of some feature, the corresponding coroutine is closed independently and its result discarded, so that the processing of other coroutines is not affected. After the result computation for all contents of all features is completed, the results are stored in a cache corresponding to the service layer. Optionally, the results may be formatted according to a specification before being stored in the cache. Thus, when a request for the same content is received again, the computation results in the cache can be reused directly. Thereafter, if the access layer has not requested that the connection be closed, the computation is returned directly on this connection. If the access layer connection has been closed because of a timeout, the computation may be returned by way of the message queue described above.
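The failure isolation and caching described above can be sketched as follows. The `compute` function, the failure trigger, and the cache key format are hypothetical; the point is that one failed coroutine's result is discarded without disturbing the others, and successful results land in a reusable cache:

```python
import asyncio

cache = {}  # stand-in for the cache corresponding to the service layer

async def compute(feature, content):
    if content == "bad":
        raise ValueError("computation failed")  # only this coroutine fails
    await asyncio.sleep(0)
    return "result"

async def batch_with_isolation(feature, contents):
    # return_exceptions=True keeps one failed coroutine from aborting the rest.
    outcomes = await asyncio.gather(
        *(compute(feature, c) for c in contents), return_exceptions=True)
    for content, outcome in zip(contents, outcomes):
        if isinstance(outcome, Exception):
            continue  # discard the failed result; others are unaffected
        cache[(feature, content)] = outcome  # reusable on the next request

asyncio.run(batch_with_isolation("crop", ["pic1", "bad", "pic2"]))
```

A subsequent request for `("crop", "pic1")` could then be answered from the cache without recomputation, which is the reuse the passage describes.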
The flow of an example content processing method 500 according to another embodiment of the present disclosure is described below in conjunction with fig. 5 (a).
At step S501, a front-end request is obtained by the front end and transmitted to the presentation layer. For example, the content to be processed may be sent along with the front-end request, or the content to be processed, or a link to it, may be included within the front-end request; the present disclosure does not limit the manner of messaging here.
At step S502, the presentation layer parses the request from the front end to obtain the content data and the processing command, and sends a first request to the access layer. The data passed to the access layer by the first request may include the content data (e.g., a local link to the saved content data file), the processing command (e.g., whether to crop or to perform definition processing), and so on. Parsing of the request from the front end by the presentation layer may also include preliminary processing of the content data. For example, the preliminary processing may include saving the content to be processed (e.g., a picture file) to an intranet, and reading the intranet link of the saved file as the content data. The processing of the content data by the presentation layer may include normalization (or legitimacy) checking of fine-grained request parameters, including checking whether the incoming content is in a legal format (e.g., for pictures, jpg format, gif format, etc.), checking the request parameters, and checking whether the user is a legitimate user (whether the user is logged in, user authority, etc.). The data passed to the access layer by the first request may include a data source. The data source may indicate to the access layer that the request is from the "presentation layer" rather than from an external source or external gateway. The data passed to the access layer by the first request may optionally further comprise a service interface identification characterizing the service interface in the service layer that needs to be invoked.
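Step S502 can be sketched in miniature. The field names, the allowed-format list, and the function name are all hypothetical, chosen only to illustrate the checks and the first-request structure described above:

```python
# Hypothetical sketch of the presentation layer's parsing and legitimacy checks.
ALLOWED_FORMATS = {"jpg", "gif", "png"}

def parse_front_end_request(request):
    """Validate the front-end request and build the first request for the
    access layer, tagging the data source as the presentation layer."""
    if not request.get("logged_in"):
        raise PermissionError("user is not a legitimate user")
    content = request["content"]            # e.g. an intranet link to the file
    if content.rsplit(".", 1)[-1].lower() not in ALLOWED_FORMATS:
        raise ValueError("content is not in a legal format")
    return {
        "content_data": content,
        "process_command": request["command"],   # e.g. "crop"
        "source": "presentation_layer",          # not an external gateway
        "service_interface": request.get("service_interface"),
    }
```

The `source` field is what later lets the access layer distinguish presentation-layer traffic from external-gateway traffic, and the optional `service_interface` field corresponds to the service interface identification mentioned above.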
That is, the presentation layer also performs preliminary processing on the request parameters from the front end to select the service layer capabilities to be invoked, so that the access layer no longer needs to process the request parameters, but is only responsible for passing data incoming from the presentation layer to the corresponding service layer based on the capabilities that the presentation layer has already selected.
At step S503, the access layer receives the first request and, based on the forwarding criteria being satisfied, sends a second request to the service layer. The second request may include the content data and the processing command for processing by the service layer. For example, the access layer may forward the content data and the processing command in the request to the corresponding interface of the service layer based on predetermined routing rules, or based on the service interface identification carried in the request from the presentation layer. There is no need to forward the source information and the like in the request to the service layer. The forwarding criteria in step S503 may include a security check and a flow control indicator. The access layer may determine, based on the source, whether the data source, i.e., the presentation layer or an external source gateway, has access to the service layer. The access layer may distribute and balance traffic from different sources and to different service layer ports. The access layer may transmit data including the content and the processing parameters to the corresponding interface of the service layer in response to the security check and the flow control indicator being satisfied.
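Step S503 can likewise be sketched. The source whitelist, routing table, and load limit are illustrative placeholders, not values from the disclosure:

```python
# Hypothetical sketch of the access layer's forwarding check.
ALLOWED_SOURCES = {"presentation_layer", "external_gateway"}
ROUTES = {"picture": "service:picture", "text": "service:text"}

def forward(first_request, current_load, load_limit=100):
    """Apply the security check and flow-control indicator, then build the
    second request for the service layer (source info is not forwarded)."""
    if first_request["source"] not in ALLOWED_SOURCES:
        raise PermissionError("source may not access the service layer")
    if current_load >= load_limit:
        raise RuntimeError("flow control: traffic limit reached")
    interface = ROUTES[first_request["service_interface"]]
    return interface, {
        "content_data": first_request["content_data"],
        "process_command": first_request["process_command"],
        # note: no "source" field — the service layer does not need it
    }
```

Dropping the `source` field when building the second request reflects the statement above that source information need not be forwarded to the service layer.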
At step S504, the service layer receives the second request, processes the content data based on the processing command, and generates result data. The service layer may process the content data by calling various databases or external functions. The service layer may have a variety of different processing capabilities. For example, for picture content data, a cropped picture or a higher definition picture may be generated as result data; for text content data, corrected text, keywords, related topics, and the like may be generated as result data. It is to be understood that the present disclosure is not limited thereto.
At step S505, it is determined whether the request has timed out.
If the request has timed out, the flow proceeds to step S506, and a message queue is established between the service layer and the presentation layer. The service layer pushes the result data to the message queue. At step S507, the presentation layer reads the result from the message queue and saves it to the cache. Next, at step S508, in response to receiving another poll from the front end, the presentation layer sends the result in the cache back to the front end.
Thereafter, the flow turns to step S509, where the data is processed and presented by the front end, and the content processing procedure ends.
If the request has not timed out, the flow proceeds to step S510, and the service layer transmits the result data back to the access layer directly within the present request, for example, as a response to the second request. The access layer then returns the result data in combination with the data source information. For example, the access layer, using the data source information stored in itself, determines whether the request and the corresponding data content originated from the presentation layer or the gateway, and passes the data back to the corresponding presentation layer or gateway accordingly. In the case where the data is passed back to the presentation layer, the presentation layer passes the data back to the front end. The presentation layer need not process the data further here, or the presentation layer may perform some processing that does not involve the core functionality, such as splicing, presentation, or personalization. If the data is transmitted back to the gateway, the result data is forwarded by the gateway to the corresponding network device in any way that will be readily understood by a person skilled in the art, and the data flow in the gateway case is omitted here. Thereafter, the flow may go to step S509, where the front end processes and presents the data, and the content processing procedure ends.
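The return path in the non-timeout case can be sketched briefly. The bookkeeping structure (`sources` keyed by a request id) is a hypothetical stand-in for the data source information the access layer is described as holding:

```python
# Hypothetical sketch of the access layer routing a result back to its source.
def return_result(request_id, result, sources):
    """Route result data back to the presentation layer or the gateway,
    based on the source recorded when the first request arrived."""
    source = sources.pop(request_id)  # recorded at forwarding time
    if source == "presentation_layer":
        return ("presentation_layer", result)  # then passed to the front end
    return ("external_gateway", result)        # gateway forwards it onward
```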
By decoupling the modules so that different functional areas are divided into a presentation layer, a service layer, and the like, each layer is responsible for independent functional points, mutual entanglement within the same module is avoided, and each layer can be upgraded independently. The whole process controls traffic sources and forwarding in a more standardized manner, and the abstracted interfaces and capabilities of the service layer can be provided externally as a service-oriented middle platform. Iteration efficiency is improved, research and development costs are reduced, and interface latency is optimized. Fig. 5(b) shows an example architecture diagram of a content processing system according to another embodiment of the present disclosure.
A content processing apparatus 600 according to another aspect of the present disclosure is described below in conjunction with fig. 6. The content processing apparatus 600 may include an access layer 601 and a service layer 602. The access layer 601 may be configured to obtain a first request, the first request including content data and a processing command for the content data, and to send a second request to the service layer 602 in response to the source of the first request satisfying a security metric. The service layer 602 may be configured to process the content data according to the processing command. The second request includes the content data and the processing command, and the second request does not include the source of the first request.
According to another aspect of the present disclosure, there is also provided a computing device, which may include: a processor; and a memory storing a program comprising instructions which, when executed by the processor, cause the processor to perform the content processing method described above.
According to still another aspect of the present disclosure, there is also provided a computer-readable storage medium storing a program, which may include instructions that, when executed by a processor of a server, cause the server to perform the above-described content processing method.
Referring to fig. 7, a block diagram of a computing device 700 will now be described. The computing device 700 may be a server or a client of the present disclosure, and is an example of a hardware device that may be applied to aspects of the present disclosure.
Computing device 700 may include components connected to bus 702 (possibly via one or more interfaces) or in communication with bus 702. For example, computing device 700 may include a bus 702, one or more processors 704, one or more input devices 706, and one or more output devices 708. The one or more processors 704 may be any type of processor and may include, but are not limited to, one or more general purpose processors and/or one or more special purpose processors (e.g., special purpose processing chips). The processor 704 may process instructions for execution within the computing device 700, including instructions stored in or on a memory to display graphical information for a GUI on an external input/output apparatus (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple computing devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). Fig. 7 illustrates an example of a processor 704.
Input device 706 may be any type of device capable of inputting information to computing device 700. The input device 706 may receive input numeric or character information and generate key signal inputs related to user settings and/or functional controls of the content processing computing device, and may include, but is not limited to, a mouse, keyboard, touch screen, track pad, track ball, joystick, microphone, and/or remote control. Output device 708 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer.
Computing device 700 may also include or be connected with a non-transitory storage device 710, which may be any storage device that is non-transitory and that may enable data storage, and may include, but is not limited to, a magnetic disk drive, an optical storage device, solid state memory, a floppy disk, a flexible disk, a hard disk, a magnetic tape, or any other magnetic medium, an optical disk or any other optical medium, a ROM (read only memory), a RAM (random access memory), a cache memory, and/or any other memory chip or cartridge, and/or any other medium from which a computer may read data, instructions, and/or code. The non-transitory storage device 710 may be removable from the interface. The non-transitory storage device 710 may have data/programs (including instructions)/code/modules (e.g., access stratum 601 and service stratum 602 shown in fig. 6) for implementing the above-described methods and steps.
Computing device 700 may also include a communication device 712. The communication device 712 may be any type of device or system that enables communication with external devices and/or with a network, and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication device, and/or a chipset, such as a Bluetooth(TM) device, an 802.11 device, a WiFi device, a WiMax device, a cellular communication device, and/or the like.
Computing device 700 may also include a working memory 714, which may be any type of working memory that can store programs (including instructions) and/or data useful for the operation of processor 704, and which may include, but is not limited to, random access memory and/or read only memory devices.
Software elements (programs) may be located in the working memory 714 including, but not limited to, an operating system 716, one or more application programs 718, drivers, and/or other data and code. Instructions for performing the above-described methods and steps may be included in one or more application programs 718, and the above-described methods may be implemented by the processor 704 reading and executing the instructions of the one or more application programs 718. Executable code or source code for the instructions of the software elements (programs) may also be downloaded from a remote location.
It will also be appreciated that various modifications may be made in accordance with specific requirements. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. For example, some or all of the disclosed methods and apparatus may be implemented by programming hardware (e.g., programmable logic circuitry including Field Programmable Gate Arrays (FPGAs) and/or Programmable Logic Arrays (PLAs)) in an assembly language or hardware programming language such as VERILOG, VHDL, or C++, using logic and algorithms according to the present disclosure.
It should also be understood that the foregoing method may be implemented in a server-client mode. For example, a client may receive data input by a user and send the data to a server. The client may also receive data input by the user, perform part of the processing in the foregoing method, and transmit the data obtained by the processing to the server. The server may receive data from the client and perform the aforementioned method or another part of the aforementioned method and return the results of the execution to the client. The client may receive the results of the execution of the method from the server and may present them to the user, for example, through an output device. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computing devices and having a client-server relationship to each other. The server may be a server of a distributed system or a server incorporating a blockchain. The server can also be a cloud server, or an intelligent cloud computing server or an intelligent cloud host with artificial intelligence technology.
It should also be understood that the components of computing device 700 may be distributed across a network. For example, some processes may be performed using one processor while other processes may be performed by another processor that is remote from the one processor. Other components of the computing device 700 may also be similarly distributed. As such, computing device 700 may be interpreted as a distributed computing system that performs processing at multiple locations.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the methods, systems, and apparatus described above are merely exemplary embodiments or examples, and that the scope of the present disclosure is not limited by these embodiments or examples, but only by the claims as issued and their equivalents. Various elements in the embodiments or examples may be omitted or may be replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure. Further, various elements in the embodiments or examples may be combined in various ways. It should be noted that, as technology evolves, many of the elements described herein may be replaced with equivalent elements that appear after the present disclosure.