Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided to convey a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
In describing embodiments of the present disclosure, the term "comprising" and its variants should be understood to be open-ended, i.e., "including, but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The terms "first," "second," and the like may refer to different or the same objects unless explicitly stated otherwise. Other explicit and implicit definitions are also possible below.
The basic principles and implementations of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the exemplary embodiments are presented merely to enable those skilled in the art to better understand and practice the embodiments of the present disclosure and are not intended to limit the scope of the present disclosure in any way.
Fig. 1 illustrates a schematic diagram of an environment 100 in which various embodiments of the present disclosure can be implemented. In the environment 100 shown in fig. 1, when a user opens an application through an electronic device 102, the application may load data from a database or server through a long list. The data may be static or may be dynamically loaded. The elements used to carry content according to the embodiments shown in this disclosure may be referred to as child nodes, such as child node 104. The child node 104 may be used to present multi-modal content such as icons, text, pictures, video, and the like. The sizes of the plurality of child nodes may be the same or different. It should be appreciated that the electronic device 102 may be configured as a computing system, such as a single server, a distributed server, or a cloud-based server; as a laptop computer, a user terminal, a mobile device, or a computer; or as a combination of the above.
These child nodes are part of each list item in the long list. According to some embodiments of the present disclosure, the child nodes may represent merchandise items, and each child node contains data such as merchandise pictures, merchandise names, and the like. When a user opens an application, the long list may fill the area visible on the current screen of the electronic device 102 by loading and rendering the child nodes.
In some scenarios, rendering schemes for electronic devices typically rely on system-provided application programming interfaces (APIs) to implement core rendering logic, resulting in cross-platform discrepancy issues. The behavior of different platforms is inconsistent; for example, the rendering mechanisms of the various mobile operating systems differ. This variability forces a developer to write a large amount of adaptation logic, increasing the complexity of development and maintenance and degrading the development experience.
In view of this, embodiments of the present disclosure provide a method of rendering an interface, including receiving, by an application, a rendering request from an operating system. The rendering request is then processed by the element layer of the application. The element layer is implemented by the application and is separate from the operating system. For example, the element layer may be written in a language that is independent of the operating system platform. The elements in the interface are rendered by the element layer based on the processing and the data from the implementation layer of the application. For example, the element layer may adjust the layout and position of elements in the interface. The implementation layer is configured to manage the layout of the interface and pass data for the application to the element layer.
According to the method, rendering logic can be implemented uniformly on the application side, independent of the system; rendering effects of different platforms can be aligned more easily; differences among the platforms are avoided; operating-system-specific rendering logic is reduced; and dependence on the APIs of the operating system is avoided. The method according to the embodiment of the disclosure can also be used to rapidly adapt to a new platform, thereby reducing access cost and improving development efficiency and experience.
Exemplary embodiments of the present disclosure will be described in detail below with reference to fig. 2 to 14.
Fig. 2 shows a schematic flow diagram of a method 200 for rendering an interface according to an embodiment of the present disclosure. As shown in fig. 2, at block 210, a rendering request from an operating system is received. For example, the rendering request may correspond to one or more of a scroll operation, a click operation, a slide operation, a long press operation, a rotate operation, or a drag operation on the interface, requiring one or more elements in the interface to be loaded or moved.
At block 220, the rendering request is processed by an element layer of the application. The element layer is implemented by the application and is separate from the operating system. The application may be, for example, a news program, an electronic shopping program, or a chat program. The element layer in the application may be independent of the operating system platform of the device and may adapt to any of various platforms, and the processing may include analyzing the amount of content that needs to be loaded from the device's background data source for the operation.
At block 230, elements in the interface are rendered by the element layer based on the processing and data from the implementation layer of the application. In some embodiments, the layout of the interface to be rendered may be determined by the element layer based on the operation amount of the rendering request and the element data that can be loaded. According to the method, rendering logic can be implemented uniformly, rendering effects of different platforms can be aligned more easily, and differences among the platforms are avoided, so that operating-system-specific rendering logic is reduced. The method according to the embodiment of the disclosure can also be used to rapidly adapt to a new platform, thereby reducing access cost and improving development efficiency and experience. Embodiments of a method for rendering an interface are described in detail below with reference to figs. 3 to 14.
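As an illustration only, the three blocks of method 200 might be sketched in C++ (the language in which an element layer may be implemented, per embodiments described below). All names here (`ElementLayer`, `RenderRequest`, `Process`, `Render`, the fixed item height) are hypothetical and not part of the disclosed method:

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Hypothetical sketch: the element layer receives a rendering request
// (block 210), processes it (block 220), and renders elements using data
// supplied by the implementation layer (block 230).
struct RenderRequest {
    std::string type;   // e.g. "scroll", "click"
    double amount = 0;  // e.g. scroll distance in pixels
};

struct Element {
    std::string content;
    double offset = 0;  // position along the scroll axis
};

class ElementLayer {
public:
    explicit ElementLayer(std::vector<std::string> data) : data_(std::move(data)) {}

    // Blocks 210/220: receive and process the request (here: accumulate a
    // scroll offset from the operation amount).
    void Process(const RenderRequest& req) {
        if (req.type == "scroll") scrollOffset_ += req.amount;
    }

    // Block 230: render elements based on the processing and the data
    // passed down from the implementation layer.
    std::vector<Element> Render() const {
        std::vector<Element> out;
        double y = -scrollOffset_;
        for (const auto& item : data_) {
            out.push_back({item, y});
            y += kItemHeight;  // assume a fixed item height for the sketch
        }
        return out;
    }

private:
    static constexpr double kItemHeight = 100.0;
    std::vector<std::string> data_;
    double scrollOffset_ = 0;
};
```

Because the layer holds only its own state and the data handed to it, nothing in this sketch depends on an operating system API, which is the point of the separation.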
Fig. 3 shows a schematic diagram of an overall architecture for rendering an interface according to an embodiment of the present disclosure. The method 200 may be implemented by the process 300 shown in fig. 3.
As shown in fig. 3, a platform layer 302 may be included, which may be a platform based on a different architecture or system, such as various mobile operating system platforms. The platform layer 302 may include a scroll container (ScrollContainer) that may be used to carry UI components and ensure that content can be smoothly scrolled with user interactions.
In some embodiments, the scrolling container of the platform layer 302 may also include a rendering module 304 that may render the interface presented to the user based on the UI rendering and layout information provided by the element layer 310, thereby ensuring that the interface is properly displayed.
According to some embodiments of the present disclosure, the scroll container of the platform layer 302 may include a gesture or animation module 306. Gesture or animation module 306 may manage and receive gesture interactions and animated scrolling from a user based on an application programming interface (API) provided by the system. For example, in some embodiments, gesture or animation module 306 may process a gesture from a user or an animated scroll initiated by the user and distribute the scroll distance to element layer 310.
In some embodiments, the scroll container of the platform layer 302 may also include a UI method module 308 that may enable UI-related method calls, such as scrolling to a specified location, dynamically adjusting a layout, and so forth. In other embodiments, UI method module 308 may also process logic related to the platform layer 302, such as implementing node roll-up effects, scroll inertia adjustment, and the like.
In some embodiments, a method implemented in accordance with the present disclosure may further include an element layer 310, which is a core logic layer that may carry all the capabilities of the elements, may interact with the layout engine and data sources, and may also interface with the scroll container of the platform layer 302. Specifically, in some embodiments, the content seen by a user at the platform layer 302, including components or child nodes such as buttons, text boxes, and pictures, is presented at the element layer 310 in the form of element objects and managed by the element layer 310.
In some embodiments, the element layer 310 may be implemented within an application and separate from the operating system in which the application resides. In some embodiments, for example, element layer 310 may be written in C++, thereby implementing a long list component with capability independent of the platform, reducing development and maintenance costs.
It should be understood that in embodiments of the present disclosure, the terms "child node" and "element" may refer to the same or different objects and may be used interchangeably. For example, in some embodiments, both may represent the same object, such as a UI component or a visualized UI entity in an interface.
In some embodiments, the element layer 310 may have an attribute module 314 that may be used to process all attributes set by the front-end platform layer 302. For example, a button drawn at the platform layer 302 may correspond to an element object in the element layer 310, which may include all properties of the button, such as a style of the button (e.g., blue), a position of the button (coordinates on the screen), and so on.
In some embodiments, the element layer 310 may include an event module 316 for sending or responding to events from the platform layer. For example, in some embodiments, the event module 316 may be responsive to a user scroll action event or to a user click action on a button. In some embodiments, the element layer 310 may also include a rendering module 318 that may determine the manner in which an element or node is rendered. For example, rendering module 318 may determine information of the size, location, spacing, visibility, etc. of the elements to be rendered. In some embodiments, rendering module 318 may translate the elements in element layer 310 into UI components and ultimately be presented on a screen.
According to embodiments of the present disclosure, the element layer 310 may also include a reclamation module 320 that may be used to implement reclamation and reuse of child nodes or elements in the interface. In some embodiments, the reclamation module 320 may monitor which elements in the interface have moved outside the interface or are no longer visible on the screen. The reclamation module 320 may then mark such an element as recyclable and place it into a recycle pool instead of destroying it immediately. The reclamation module 320 may redraw previously recycled elements onto the interface by comparing the new data with the contents of the old elements (e.g., by a difference computation).
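The recycle-pool behavior of reclamation module 320 might be sketched as follows. This is a minimal illustration, not the disclosed implementation; `RecyclePool`, `Recycle`, and `Obtain` are hypothetical names:

```cpp
#include <cassert>
#include <deque>
#include <string>
#include <utility>

// Hypothetical sketch of reclamation module 320: elements that move out of
// view are marked recyclable and placed in a pool rather than destroyed;
// a new list item reuses a pooled element when one is available.
struct PooledElement {
    std::string content;
};

class RecyclePool {
public:
    // Called when an element moves outside the visible interface.
    void Recycle(PooledElement e) { pool_.push_back(std::move(e)); }

    // Called when a new list item needs an element; reuses a pooled one
    // (rebinding its content to the new data) instead of allocating.
    PooledElement Obtain(const std::string& newContent) {
        if (!pool_.empty()) {
            PooledElement e = std::move(pool_.front());
            pool_.pop_front();
            e.content = newContent;  // rebind the old element to new data
            return e;
        }
        return PooledElement{newContent};  // no reusable element: create one
    }

    size_t Size() const { return pool_.size(); }

private:
    std::deque<PooledElement> pool_;
};
```

The design choice worth noting is that recycling is deferred destruction: the pool trades a small amount of memory for avoiding repeated allocation and re-layout of structurally identical list items.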
An implementation layer 322 may also be included according to embodiments of the present disclosure. In the implementation layer 322, there may be a layout manager 336 for UI layout and layout calculations. In some embodiments, layout manager 336 may determine anchor logic to ensure that the user is able to precisely align the content when scrolling, making the scrolling experience smoother. The layout manager 336 may provide different layout approaches such as a linear layout 324, a grid layout 326, and a waterfall flow layout 328.
In some embodiments, the linear layout 324 may be applied to applications with vertically or horizontally arranged lists, e.g., chat software, news stream applications, etc. In the linear layout 324, each node may be arranged in order, rendering only elements within the visible region when scrolling. In some embodiments, the grid layout 326 may be applied in multi-column applications such as an album or photo gallery, where it is desirable to ensure that the height of each column is consistent at the time of rendering so that the grid content is aligned. In some embodiments, the waterfall flow layout 328 may be applied to an e-commerce item list, where the elements in the list may have contents of irregular heights, the height of each column may differ, and each element may be sequentially populated into the column with the shortest current height.
In some embodiments, implementation layer 322 may also include an adapter 330. The adapter 330 may parse the data source 332 of the application, e.g., obtain data from a server or local application store, and update the data as needed according to the requirements of the UI elements or operations. The adapter 330 may also be updated by consuming the difference information data 334; e.g., the adapter 330 may retrieve new data from the data source by comparing the difference data.
Fig. 4 shows a schematic diagram of a process 400 for rendering an interface according to an embodiment of the disclosure. As shown in fig. 4, in some embodiments, when a user performs a slide gesture on the platform layer scroll view 402 of an electronic device, the platform layer may send operational information about the scroll distance to a list 404 in the element layer. In some embodiments, the scroll distance represents the pixel value by which the user slides. In some embodiments, the platform layer scroll view 402 may also detect the user's scroll direction, scroll speed, and the like.
In some embodiments, when the list 404 receives the operation information of the scroll distance, the list 404 may consume the difference data (difference information) 408 through the difference data algorithm 414 to determine changes between the old and new data, such as comparing the previously displayed data (displayed list items) with the new data to be displayed (such as the list items to be displayed after scrolling). In some embodiments, the list 404 may consume the difference data 408 to determine newly added, deleted, or updated list items.
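As an illustration of what consuming difference data can mean, the following sketch classifies old and new list item identifiers. The function `ComputeDiff` and its identifier-based comparison are assumptions for illustration; the disclosed difference data algorithm 414 is not limited to this form:

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// Hypothetical sketch of a difference computation: compare the previously
// displayed list item ids with the new ids to be displayed and classify
// them as added, removed, or retained.
struct DiffResult {
    std::vector<std::string> added;
    std::vector<std::string> removed;
    std::vector<std::string> retained;
};

DiffResult ComputeDiff(const std::vector<std::string>& oldItems,
                       const std::vector<std::string>& newItems) {
    std::set<std::string> oldSet(oldItems.begin(), oldItems.end());
    std::set<std::string> newSet(newItems.begin(), newItems.end());
    DiffResult r;
    for (const auto& id : newItems)
        (oldSet.count(id) ? r.retained : r.added).push_back(id);
    for (const auto& id : oldItems)
        if (!newSet.count(id)) r.removed.push_back(id);
    return r;
}
```

Retained items are candidates for direct reuse, added items may be served from the cache pool or data source, and removed items become recyclable.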
In some embodiments, list 404 may also call a reclamation multiplexing module 410 in the element layer to determine whether a new list item to be displayed has a reusable item available in list item cache pool 412. If a reusable list item is available in list item cache pool 412, it may be fetched directly from list item cache pool 412, thereby avoiding the creation of duplicate list items. In some embodiments, if there is no matching reusable item, the data source 406 may be requested to obtain new list item data.
In some embodiments, the reclamation multiplexing module 410 may also monitor which elements in the interface have moved outside the interface or are no longer visible on the screen. The reclamation multiplexing module 410 may then mark them as recyclable and place them in the list item cache pool 412.
In some embodiments, the updated data may also be rearranged (416) by a rendering module in the element layer, such as updating new, deleted, or moved elements or list items. In some embodiments, views may also be added to or removed from the scroll view 402 of the platform layer through the list 404, thereby realizing local view updates, avoiding full UI redraws, improving rendering efficiency, and ensuring smooth scrolling of the application.
Fig. 5 shows a schematic diagram of another process 500 for rendering an interface according to an embodiment of the disclosure. In some embodiments, at block 502, the list may parse the difference information and initiate the layout. For example, in accordance with some embodiments of the present disclosure, during the first screen rendering of the interface, the list in the element layer may parse the difference information from the data source to determine newly added list items, deleted list items, moved list items, modified list items, and so forth. The list may then mark the list items that need to be updated, e.g., as dirty, and preset the width and height of the list items, thereby causing the layout engine in the element layer to initiate the layout.
At block 504, the list may obtain its own layout results. In some embodiments, for example, the layout engine may send pre-layout list results with multiple list item placeholders to the list based on multiple list items in the list and the layout manner. At block 506, the list may initiate a child node layout in conjunction with the difference information. In some embodiments, for example, the list may populate a pre-layout list having a plurality of list item placeholders with newly added list items, moved list items, modified list items, and the like to determine the child node layout. In some embodiments, the child node layout may also be triggered based on a scroll event 520 sent by the platform layer component.
At block 508, it is determined whether the list area may continue to be filled. In some embodiments, the element layer may determine whether there is a blank region in the list region of the current view. At block 510, when it is determined that there is a blank region in the list region, child node rendering may be initiated. For example, in some embodiments, the element layer may execute a rendering function to generate a complete element tree.
At block 512, child node resolution may be performed. For example, the constructed element tree may be parsed, and an attribute set may be established for each element. The parsed attribute set may include styles (such as colors, fonts, margins, etc.), properties (such as width, height, and location), events (such as clicks and touches), and gestures (such as swipes, zooms, etc.).
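One way the attribute-set construction of block 512 could look is sketched below. The flat key/value input, the `ParseAttributes` function, and the split into styles versus geometry are all assumptions made for illustration:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Hypothetical sketch of child node resolution: each element in the parsed
// element tree receives an attribute set of styles, geometry, and events.
struct AttributeSet {
    std::map<std::string, std::string> styles;            // e.g. color, font, margin
    double width = 0, height = 0;                         // geometry properties
    std::map<std::string, std::function<void()>> events;  // e.g. click, touch
};

// Parse a flat key/value property map (as might be set by the front end)
// into a structured attribute set; unrecognized keys are kept as styles.
AttributeSet ParseAttributes(const std::map<std::string, std::string>& props) {
    AttributeSet a;
    for (const auto& [key, value] : props) {
        if (key == "width") a.width = std::stod(value);
        else if (key == "height") a.height = std::stod(value);
        else a.styles[key] = value;
    }
    return a;
}
```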
At block 514, a child node layout may be performed. In some embodiments, during the layout stage, the entire element tree previously constructed may be laid out. The specific layout of each element is determined, for example, based on the size of the element, constraints of the parent element, and other attributes.
The layout engine in the element layer may then return child node layout information 516 to the list, which may again complete the fill and determine whether a blank region remains; if so, it may continue initiating child node rendering.
At block 518, the platform layer generates UI information or layout information. In some embodiments, when the visible area is fully filled by the child nodes, the view update and layout update generated in the process of rendering the child nodes can be refreshed to the platform layer in the form of a UI operation queue, and the platform layer can complete view rendering in turn. Operations 502 through 514 may be implemented at an element layer and an implementation layer in an application, while operations 518 and 520 may be implemented at a platform layer of an operating system, according to embodiments of the present disclosure.
FIG. 6 illustrates a schematic diagram of a process 600 for rendering a single column layout of an interface, according to an embodiment of the disclosure. According to embodiments of the present disclosure, a node such as block 614 may represent an unloaded node whose height is the default height. Each layout may begin at an anchor point location (anchorOffset) and fill the blank screen area of the device as completely as possible. The process shown in fig. 6 comprises two fills, where the first fill comprises blocks 602, 604, 606, and 608 and fills to the end (top down).
In some embodiments, at block 602, the layout start anchor location may be determined by an anchor policy, thereby determining the available area of the region to be filled, where an additional area may be added if one is available. At block 604, the sizes of the consumed regions may be calculated one by one using the post-layout sizes. Content offsets due to filling may be adjusted at blocks 606 through 608.
The next fill (bottom up) may be performed at block 610 with the available area minus the consumed area. Content offsets due to filling may be adjusted at blocks 612 through 614. In some embodiments, the filling terminates when the available area is consumed; if the number of data source items is insufficient, the offset may be adjusted to create an additional area, and filling resumes in the reverse direction.
Fig. 7 shows a schematic diagram of another process 700 for rendering a single column layout of an interface, in accordance with an embodiment of the disclosure. According to an embodiment of the present disclosure, the process shown in fig. 7 is three fills, with blocks 702-704 being the first fill to end (FillToEnd), blocks 706, 708, 710 being the second fill to start (FillToStart), and blocks 712-714 being the third fill to end (FillToEnd).
The filling method is the same as or similar to that of fig. 6. In some embodiments, the position of the initial layout anchor point can be found through an anchor point strategy, and the available area of the region to be filled is determined; if an additional area exists, it is added. The consumed area can then be calculated using the post-layout sizes, and the consumed area is subtracted from the available area for the next fill. When the available area is consumed, filling ends; if the number of data source items is insufficient, the offset is adjusted to generate an additional area, and filling resumes in the reverse direction.
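A simplified version of this fill strategy might look as follows. The function `FillViewport` and its parameters are hypothetical; the sketch assumes item heights are already known and performs one fill-to-end pass followed by one fill-to-start pass from an anchor:

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch of the single-column fill of Figs. 6-7: start at an
// anchor offset, fill downward (FillToEnd) until the available area below
// is consumed, then fill upward (FillToStart) with the remaining area.
struct FillResult {
    int firstIndex;  // first laid-out item (inclusive)
    int lastIndex;   // last laid-out item (inclusive)
};

FillResult FillViewport(const std::vector<double>& heights,
                        int anchorIndex, double viewportHeight,
                        double anchorOffset) {
    // FillToEnd: consume the area below the anchor, top down.
    double remainingDown = viewportHeight - anchorOffset;
    int last = anchorIndex - 1;
    while (last + 1 < static_cast<int>(heights.size()) && remainingDown > 0) {
        ++last;
        remainingDown -= heights[last];  // subtract consumed area
    }
    // FillToStart: consume the area above the anchor, bottom up.
    double remainingUp = anchorOffset;
    int first = anchorIndex;
    while (first - 1 >= 0 && remainingUp > 0) {
        --first;
        remainingUp -= heights[first];
    }
    return {first, last};
}
```

The loop structure mirrors the text: each pass subtracts the consumed region from the available area and stops once the area is exhausted or the data source runs out of items.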
FIG. 8 shows a schematic diagram of a process 800 for rendering a multi-column grid layout of an interface, in accordance with an embodiment of the disclosure. In some embodiments, when there are multiple columns, they may be laid out in a grid-aligned fashion, with the top of each row aligned. The multi-column (stream layout) policy is an extension of the single column layout, in which the filling of a single column is changed to preferentially fill a row. For example, in some embodiments, the anchor policy may determine the anchor location to be element 2, and each row may then be top-aligned based on element 2. The specific layout method can be understood with reference to figs. 6 to 7.
Fig. 9 shows a schematic diagram of a process 900 for rendering a waterfall layout of an interface, according to an embodiment of the disclosure. The waterfall layout may be represented as a staggered multi-column layout of multiple elements, which may be continually loaded with blocks of element data appended to the current tail as the page scrollbar scrolls down. The waterfall layout in some embodiments is compact, with each element placed in the currently shortest column, leaving no extra space between elements along the scrolling main axis of the screen. The column in which each element is placed depends on the layout results of the elements preceding it.
In some embodiments, the layout may be based on difference data. For example, at block 902, rendering of all child nodes within the viewable area of a list may be initiated according to the preset heights of the child nodes in the list. In some embodiments, at blocks 904, 906, and 908, after the layout information for the nodes (e.g., the actual height of each node) is obtained, the layout flow may be re-executed. If child nodes that have not been rendered still exist in the visible area of the list, rendering of the child nodes in the visible area continues to be initiated.
Fig. 10 shows a schematic diagram of another process 1000 for rendering a waterfall layout of an interface, according to an embodiment of the disclosure. The process fills to the end (top down), in which the offset and the position of element 0 need not be adjusted; only the end position of the item placeholders currently holding elements need be calculated, and the next element is filled into the shortest column. For example, in some embodiments, the anchor point location may be determined to be element 2 based on an anchor point policy, and the next element is then filled into the shortest column.
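The shortest-column rule described for the waterfall fill can be sketched directly. `LayoutWaterfall` and its signature are assumptions for illustration; the sketch only tracks column bottoms and assigns each element to the currently shortest column:

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch of the waterfall fill of Figs. 9-10: each element is
// appended to the column whose current bottom (end position) is the
// shortest, leaving no extra space along the scrolling main axis.
std::vector<int> LayoutWaterfall(const std::vector<double>& itemHeights,
                                 int columnCount) {
    std::vector<double> bottoms(columnCount, 0.0);  // end position per column
    std::vector<int> columnOfItem;
    for (double h : itemHeights) {
        // Pick the currently shortest column.
        int shortest = 0;
        for (int c = 1; c < columnCount; ++c)
            if (bottoms[c] < bottoms[shortest]) shortest = c;
        columnOfItem.push_back(shortest);
        bottoms[shortest] += h;  // the new element extends that column
    }
    return columnOfItem;
}
```

Because each placement depends on the running column bottoms, the column of any element is a function of all preceding elements' layout results, matching the description of fig. 9.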
Fig. 11 shows a schematic diagram of yet another process 1100 for rendering a waterfall layout of an interface, according to an embodiment of the disclosure. The process may be a fill-to-start process (bottom up), with block 1102 being an initial state. At block 1104, the offset may be adjusted, which may be accomplished by finding an anchor, i.e., the element whose item placeholder has the minimum top, such as element 4 in block 1104. In some embodiments, the top of the item placeholder may be aligned with the current offset at a given spacing (alignment), and at the end of each layout of the list item components, after all item placeholders are re-laid out according to the waterfall flow layout, the offset may be recalculated as offset = anchor->newTop - alignment.
In some embodiments, the size between the top line of the item placeholder with the element on the current device display screen and the available area at that time is recalculated as a result of the offset change, as shown in block 1106. In some embodiments, the next node to be laid out may also be determined based on the top line, and repeated until the nodes on the current device display are completely filled, as shown in block 1108.
Fig. 12 shows a schematic diagram of a process 1200 for performing asynchronous layout according to an embodiment of the disclosure. In some scenarios, under an asynchronous layout or multithreading model, a long list cannot synchronously acquire the true width and height of a child node while rendering that child node, so it cannot determine whether the visible space has been filled. Thus, an asynchronous layout or multithreading model may perform multiple layout passes, compared to the single layout pass of a list under a single-threaded model.
In some embodiments, during the first layout at block 1202, an element layer according to embodiments of the present disclosure may trigger rendering of all child nodes within the visible range of the current device display screen according to the preset width and height of the child nodes (which may be set by the front end); this may be the first layout. For example, in some embodiments, each child node may correspond to an item placeholder (such as item placeholder 0, item placeholder 1, item placeholder 2) in the long list for storing its layout information.
In some embodiments, at block 1204, a unique operation ID may be generated for each invocation of child node rendering, and the <operation ID, item placeholder> pair may be placed into a map, which may be stored in data source 1206. In some embodiments, the component may also be bound with an operation ID at block 1208.
In some embodiments, at block 1210, when the asynchronous rendering of a single node is complete, the currently rendered component may be passed back to the long list node along with the operation ID. The long list may find the corresponding item placeholder from the map according to the operation ID, completing the binding between the component and the placeholder. In some embodiments, the long list may obtain the true width and height of the component, and the long list layout is re-triggered; this may be the second layout. At block 1212, if the visible space is not filled, rendering of more child nodes may continue to be triggered. At block 1214, if the visible space is filled, the rendering may end and the corresponding component is retrieved from the data source 1206.
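The operation-ID bookkeeping of blocks 1204 through 1210 might be sketched as follows. `AsyncLayoutTracker` and its methods are hypothetical names; the sketch shows only the map from operation ID to item placeholder and the binding of the true measured height when an asynchronous render completes:

```cpp
#include <cassert>
#include <unordered_map>

// Hypothetical sketch of the asynchronous layout flow of Fig. 12: each
// child node render call is issued a unique operation ID mapped to its
// item placeholder; when the render completes, the component returns with
// the ID and the true measured height replaces the preset one.
struct ItemPlaceholder {
    double height;       // preset height until the real one arrives
    bool bound = false;  // whether a rendered component has been bound
};

class AsyncLayoutTracker {
public:
    // Block 1204: issue an operation ID and record the placeholder.
    int BeginRender(double presetHeight) {
        int id = nextId_++;
        pending_[id] = ItemPlaceholder{presetHeight};
        return id;
    }

    // Block 1210: the asynchronous render finished; find the placeholder by
    // operation ID and bind the component's true height to it.
    bool CompleteRender(int operationId, double trueHeight) {
        auto it = pending_.find(operationId);
        if (it == pending_.end()) return false;
        it->second.height = trueHeight;
        it->second.bound = true;
        return true;
    }

    const ItemPlaceholder* Find(int operationId) const {
        auto it = pending_.find(operationId);
        return it == pending_.end() ? nullptr : &it->second;
    }

private:
    int nextId_ = 0;
    std::unordered_map<int, ItemPlaceholder> pending_;
};
```

After each `CompleteRender`, the long list would re-run its layout with the now-accurate height and decide whether to trigger rendering of further child nodes, as in blocks 1212 and 1214.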
Thus, the method implemented by the present disclosure may provide a solution for long list rendering in a cross-platform framework that does not rely on any higher-level platform layer components, sinking the core capabilities of the long list to the C++ side. The platform layer may retain a basic scroll container to carry view and interaction functions. An element layer implemented in accordance with the present disclosure may implement a long list component whose core capabilities are platform independent.
The scheme realized according to the present disclosure not only ensures very good multi-terminal consistency, but also greatly reduces the development and maintenance costs of implementing long lists on new platforms. According to the method, the long list component can align core capabilities such as rendering performance, scrolling interaction, and the event system across multiple platforms, significantly improving multi-terminal consistency, improving the development experience, and reducing maintenance costs. The long list component can thus be implemented smoothly on multiple platforms at very low cost.
Fig. 13 shows a schematic block diagram of an apparatus 1300 for rendering an interface according to an embodiment of the disclosure. The apparatus 1300 may be used to implement the methods or steps described with reference to fig. 1-12.
As shown in fig. 13, the apparatus 1300 includes a receiving unit 1310, a processing unit 1320, and a rendering unit 1330. The receiving unit 1310 is configured to receive a rendering request from an operating system. The processing unit 1320 is configured to process the rendering request by an element layer of the application, wherein the element layer is implemented by the application and separate from the operating system. The rendering unit 1330 is configured to render, by the element layer, elements in the interface based on the processing and data from an implementation layer of the application, wherein the implementation layer is configured to manage a layout of the interface and pass data for the application to the element layer.
In some embodiments, the receiving unit 1310 is configured to pass the rendering request by a platform layer of the operating system to the element layer of the application, and wherein the rendering request includes one or more of a scroll operation, a click operation, a slide operation, a long press operation, a rotate operation, a drag operation on the interface.
In some embodiments, rendering unit 1330 is configured to add or remove one or more elements in the interface by the element layer in response to the rendering request.
In some embodiments, rendering unit 1330 is configured to determine, by the element layer, whether a blank area exists in the interface, to re-read, by the element layer, data from a data source of the implementation layer to fill into the blank area in response to the blank area existing in the interface, and to stop reading, by the element layer, data from the data source in response to the blank area not existing in the interface.
In some embodiments, rendering unit 1330 is configured to identify one or more elements in the interface, reclaim the identified one or more elements by the element layer in response to the identified one or more elements not being visible in the interface due to the rendering request, compare, by the element layer, new data from a data source with the data in the reclaimed one or more elements, and draw the reclaimed one or more elements on the interface by the element layer in response to the new data from the data source being the same as the data in the reclaimed one or more elements.
In some embodiments, the layout of the interface managed by the implementation layer includes one or more of a linear layout, a grid layout, and a waterfall layout.
In some embodiments, rendering unit 1330 is configured to determine, by the element layer, a starting position of the element in the interface and fill the blank area in the interface in two passes, first in a top-down manner and then in a bottom-up manner, in response to the layout being a single-column layout of the linear layout, and to refill, by the element layer, the blank area in the interface in a top-down manner in response to a blank area still remaining in the interface.
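The two-pass single-column fill described above may be sketched as follows; the model (an anchor element with a vertical offset inside a fixed-height viewport) and all names are hypothetical illustrations, not part of this disclosure.

```python
# Hypothetical sketch: fill downward from the starting element, then
# upward above it, and refill top-down if a blank area still remains
# at the top of the viewport.

def fill_single_column(items, start, anchor_offset, viewport):
    """items: element heights; start: index of the anchor element;
    anchor_offset: y position of the anchor's top within the viewport."""
    placed = []
    y = anchor_offset
    i = start
    while i < len(items) and y < viewport:   # pass 1: fill top-down
        placed.append(i)
        y += items[i]
        i += 1
    top = anchor_offset
    j = start - 1
    while j >= 0 and top > 0:                # pass 2: fill bottom-up
        placed.insert(0, j)
        top -= items[j]
        j -= 1
    shortfall = top                          # blank area still at the top?
    while i < len(items) and shortfall > 0:  # refill top-down once more
        placed.append(i)
        shortfall -= items[i]
        i += 1
    return placed
```

The third loop only runs when the upward pass exhausted earlier items without reaching the top of the viewport, matching the "refill top-down" condition in the text.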
In some embodiments, rendering unit 1330 is configured to preferentially fill each row in the interface by the element layer in response to the layout being a multi-column layout of the grid layout.
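As a minimal illustrative sketch of the row-first grid fill (names and the coordinate model are hypothetical):

```python
def grid_fill(n_items, num_columns):
    # row-major order: each row of the grid is filled completely
    # before any element is placed in the following row
    return [(i // num_columns, i % num_columns) for i in range(n_items)]
```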
In some embodiments, the rendering unit 1330 is configured to determine, by the element layer, a first layout based on a preset height of the element in response to the layout being the waterfall layout, and to determine, by the element layer, a second layout based on layout information of the element in response to acquiring the layout information of the element.
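The two-phase layout described above may be sketched as follows, purely as a hypothetical model: the first layout estimates every element at a preset height, and the second layout replaces the estimates once actual layout information (measured heights) is acquired.

```python
def layout_positions(heights):
    # compute the top y position of each element in a single column
    tops, y = [], 0
    for h in heights:
        tops.append(y)
        y += h
    return tops

def two_phase_layout(n_items, preset_height, measured_heights=None):
    # first layout: every element assumed to have the preset height
    first = layout_positions([preset_height] * n_items)
    if measured_heights is None:
        return first
    # second layout: recompute from the acquired layout information
    return layout_positions(measured_heights)
```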
In some embodiments, rendering unit 1330 is configured to determine, by the element layer, an end position of an item placeholder holding the element and fill a next element into a shorter column in response to the waterfall layout being filled top-down, and to determine, by the element layer, an offset after each layout and adjust the layout based on the offset in response to the waterfall layout being filled bottom-up.
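The shorter-column fill and the bottom-up offset adjustment described above may be sketched as follows; the names and the two-column model are hypothetical illustrations only.

```python
# Hypothetical sketch of a top-down waterfall fill: track the end
# position (bottom) of each column's last item placeholder and place
# the next element into the shorter column.

def waterfall_fill(heights, num_columns=2):
    columns = [[] for _ in range(num_columns)]
    bottoms = [0] * num_columns                  # end position per column
    for h in heights:
        shortest = bottoms.index(min(bottoms))   # the shorter column wins
        columns[shortest].append(h)
        bottoms[shortest] += h
    return columns, bottoms

def adjust_after_bottom_up(bottoms, viewport_bottom):
    # bottom-up fill: after each layout pass, determine the offset between
    # the laid-out bottom and the viewport bottom, then shift every column
    offset = viewport_bottom - max(bottoms)
    return [b + offset for b in bottoms]
```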
In some embodiments, the interface is a long list layout, the elements in the interface are list items in the long list, and the list items have a mapping relationship with operations in the rendering request.
It should be noted that further acts or steps described with reference to figs. 1 to 12 may be implemented by the apparatus 1300 shown in fig. 13. For example, apparatus 1300 may include additional modules or units to implement those acts or steps, or some of the units or modules shown in fig. 13 may be further configured to implement them. Details are not repeated here.
Fig. 14 shows a schematic block diagram of an example device 1400 that may be used to implement embodiments of the present disclosure. As shown, the device 1400 includes a computing unit 1401 that may perform various suitable actions and processes in accordance with computer program instructions stored in a Read Only Memory (ROM) 1402 or loaded from a storage unit 1408 into a Random Access Memory (RAM) 1403. The RAM 1403 may also store various programs and data required for the operation of the device 1400. The computing unit 1401, the ROM 1402, and the RAM 1403 are connected to each other through a bus 1404. An input/output (I/O) interface 1405 is also connected to the bus 1404.
A number of components in the device 1400 are connected to the I/O interface 1405, including an input unit 1406, e.g., a keyboard, a mouse, etc.; an output unit 1407, e.g., various types of displays, speakers, etc.; a storage unit 1408, e.g., a magnetic disk, an optical disk, etc.; and a communication unit 1409, e.g., a network card, a modem, a wireless communication transceiver, etc. The communication unit 1409 allows the device 1400 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
The computing unit 1401 may be any of a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1401 performs the various methods and processes described above, such as the method 200. For example, in some embodiments, the method 200 may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1400 via the ROM 1402 and/or the communication unit 1409. When the computer program is loaded into the RAM 1403 and executed by the computing unit 1401, one or more steps of the method 200 described above may be performed. Alternatively, in other embodiments, the computing unit 1401 may be configured to perform the method 200 by any other suitable means (e.g., by means of firmware).
In some embodiments, the methods and processes described above may be implemented as a computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for performing aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical encoding device such as punch cards or raised structures in a groove having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device, or to an external computer or external storage device, over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
The computer program instructions for performing the operations of the present disclosure can be assembly instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages and conventional procedural programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, Field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information of computer readable program instructions, such that the electronic circuitry can execute the computer readable program instructions.
These computer readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of devices, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvement of the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.