
CN119902845B - Method, device, equipment and medium for rendering interface - Google Patents

Method, device, equipment and medium for rendering interface

Info

Publication number
CN119902845B
CN119902845B (application CN202510097182.7A)
Authority
CN
China
Prior art keywords
interface
layout
rendering
element layer
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202510097182.7A
Other languages
Chinese (zh)
Other versions
CN119902845A (en)
Inventor
丁旺
柯伟兵
方舟
夏梦非
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202510097182.7A priority Critical patent/CN119902845B/en
Publication of CN119902845A publication Critical patent/CN119902845A/en
Application granted granted Critical
Publication of CN119902845B publication Critical patent/CN119902845B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides a method, apparatus, computing device, computer storage medium, and computer program product for rendering an interface. The method includes: receiving a rendering request from an operating system; processing the rendering request by an element layer of an application, wherein the element layer is implemented by the application and is separate from the operating system; and rendering elements in the interface by the element layer based on the processing and data from an implementation layer of the application, wherein the implementation layer is configured to manage the layout of the interface and pass data specific to the application to the element layer.

Description

Method, apparatus, device and medium for rendering interface
Technical Field
The present disclosure relates to the field of computer technology, and more particularly, to a method, apparatus, computing device, computer storage medium, and computer program product for rendering an interface.
Background
With the rapid development of electronic devices such as smart phones, tablet computers, laptop computers, etc., the complexity and functionality of applications are also continuously increasing. Rendering techniques act as the core of User Interfaces (UIs) and graphical displays, affecting user experience, application performance, and resource consumption of the device. On electronic devices, optimization of rendering technology is important due to hardware and battery limitations.
Long lists can be used on electronic devices to present large amounts of data, such as social media messages, news articles, merchandise lists, and the like. A long list can load and render data or views in the page on demand, which effectively reduces loading time during initialization on the mobile terminal and improves application startup speed.
Disclosure of Invention
Embodiments of the present disclosure provide a method, apparatus, computing device, computer storage medium, and computer program product for rendering an interface.
According to a first aspect of the present disclosure, a method for rendering an interface is provided. The method includes receiving a rendering request from an operating system, and processing the rendering request by an element layer of an application, wherein the element layer is implemented by the application and separate from the operating system. The method also includes rendering, by the element layer, elements in the interface based on the processing and the data from an implementation layer of the application, wherein the implementation layer is configured to manage a layout of the interface and pass the data for the application to the element layer.
According to a second aspect of the present disclosure, an apparatus for rendering an interface is provided. The apparatus includes a receiving unit configured to receive a rendering request from an operating system, a processing unit configured to process the rendering request by an element layer of an application, wherein the element layer is implemented by the application and separate from the operating system, and a rendering unit configured to render elements in an interface by the element layer based on the processing and data from an implementation layer of the application, wherein the implementation layer is configured to manage a layout of the interface and pass data for the application to the element layer.
According to a third aspect of the present disclosure there is provided a computing device comprising at least one processing unit, at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, which when executed by the at least one processing unit, cause the computing device to perform the method of the first aspect of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer storage medium comprising machine executable instructions which, when executed by a device, cause the device to perform the method of the first aspect of the present disclosure.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising machine executable instructions which, when executed by an apparatus, cause the apparatus to perform the method according to the first aspect of the present disclosure.
It should be understood that the summary is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other objects, features and advantages of embodiments of the present disclosure will become more readily apparent from the following detailed description with reference to the accompanying drawings. Embodiments of the present disclosure will now be described, by way of example and not limitation, in the figures of the accompanying drawings, in which:
FIG. 1 illustrates a schematic diagram of an example environment in which embodiments according to the present disclosure may be implemented;
FIG. 2 shows a schematic flow diagram of a method for rendering an interface according to an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of an overall architecture for rendering an interface according to an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of a process for rendering an interface according to an embodiment of the present disclosure;
FIG. 5 shows a schematic diagram of another process for rendering an interface according to an embodiment of the present disclosure;
FIG. 6 illustrates a schematic diagram of a process for rendering a single column layout of an interface, according to an embodiment of the present disclosure;
FIG. 7 shows a schematic diagram of another process for rendering a single column layout of an interface, according to an embodiment of the present disclosure;
FIG. 8 illustrates a schematic diagram of a process for rendering a multi-column layout of a grid layout of an interface, in accordance with an embodiment of the present disclosure;
FIG. 9 shows a schematic diagram of a process for rendering a waterfall layout of an interface, according to an embodiment of the present disclosure;
FIG. 10 shows a schematic diagram of another process for rendering a waterfall layout of an interface, according to an embodiment of the disclosure;
FIG. 11 shows a schematic diagram of yet another process for rendering a waterfall layout of an interface, according to an embodiment of the disclosure;
FIG. 12 shows a schematic diagram of a process for performing asynchronous layout according to an embodiment of the present disclosure;
FIG. 13 shows a schematic block diagram of an apparatus for rendering an interface in accordance with an embodiment of the disclosure; and
FIG. 14 shows a schematic block diagram of an example device that may be used to implement embodiments of the present disclosure.
The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
In describing embodiments of the present disclosure, the term "comprising" and its like should be understood to be open-ended, i.e., including, but not limited to. The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The terms "first," "second," and the like, may refer to different or the same object unless explicitly stated otherwise. Other explicit and implicit definitions are also possible below.
The basic principles and implementations of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the exemplary embodiments are presented merely to enable those skilled in the art to better understand and practice the embodiments of the present disclosure and are not intended to limit the scope of the present disclosure in any way.
Fig. 1 illustrates a schematic diagram of an environment 100 in which various embodiments of the present disclosure can be implemented. In the environment 100 shown in fig. 1, when a user opens an application through an electronic device 102, the application may load data from a database or server through a long list. The data may be static or dynamically loaded. The elements used to carry content according to the embodiments of this disclosure may be referred to as child nodes, such as child node 104. The child node 104 may be used to present multi-modal content such as icons, text, pictures, video, and the like. The sizes of the child nodes may be the same or different. It should be appreciated that the electronic device 102 may be configured as a computing system, a single server, a distributed server, or a cloud-based server; as a laptop computer, a user terminal, a mobile device, or another computer; or as a combination of the above.
These child nodes are part of each list item in the long list. According to some embodiments of the present disclosure, the child nodes may represent merchandise items, and each child node contains data such as a merchandise picture and a merchandise name. When a user opens an application, the long list may fill the content visible on the current screen of the electronic device 102 by loading and rendering the child nodes.
In some scenarios, rendering schemes for electronic devices typically rely on system-provided application programming interfaces (APIs) to implement core rendering logic, resulting in cross-platform discrepancy issues. The behavior of different platforms is inconsistent; for example, the rendering mechanisms of the various mobile operating systems differ. This variability forces developers to write a large amount of adaptation logic, increasing the complexity of development and maintenance and degrading the development experience.
In view of this, embodiments of the present disclosure provide a method of rendering an interface, including receiving, by an application, a rendering request from an operating system. The rendering request is then processed by the element layer of the application. The element layer is implemented by the application and is separate from the operating system. For example, the element layer may be written in a language that is independent of the operating system platform. The elements in the interface are rendered by the element layer based on the processing and the data from the implementation layer of the application. For example, the element layer may adjust the layout and position of elements in the interface. The implementation layer is configured to manage the layout of the interface and pass data for the application to the element layer.
According to the method, rendering logic can be uniformly realized on the application irrelevant to the system, rendering effects of different platforms can be aligned more easily, the difference among the platforms is avoided, the related rendering logic of an operating system is reduced, and the dependence on an API of the operating system is avoided. The method according to the embodiment of the disclosure can also be used for rapidly adapting to a new platform, so that the access cost is reduced, and the development efficiency and experience are improved.
Exemplary embodiments of the present disclosure will be described in detail below with reference to fig. 2 to 14.
Fig. 2 shows a schematic flow diagram of a method 200 for rendering an interface according to an embodiment of the present disclosure. As shown in fig. 2, at block 210, a rendering request from an operating system is received. For example, the rendering request may correspond to an interface operation, such as a scroll, click, slide, long press, rotate, or drag operation, that requires loading or moving one or more elements in the interface.
At block 220, the rendering request is processed by an element layer of the application. The element layer is implemented by the application and is separate from the operating system. The application may be, for example, a news program, an electronic shopping program, or a chat program. The element layer in the application may be independent of the operating system platform of the device and adapt to any of the different platforms, and the processing may include analyzing how much content needs to be loaded from the device's background data source for the operation.
At block 230, elements in the interface are rendered by the element layer based on the processing and data from the implementation layer of the application. In some embodiments, the layout of the interface to be rendered may be determined by the element layer based on the operation amount of the rendering request and the element data that can be loaded. According to the method, rendering logic can be implemented uniformly, rendering effects of different platforms can be aligned more easily, and differences among platforms are avoided, so that operating-system-specific rendering logic is reduced. The method according to embodiments of the disclosure can also rapidly adapt to a new platform, reducing access cost and improving development efficiency and experience. Embodiments of a method for rendering an interface are described in detail below with reference to figs. 3-14.
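The flow of blocks 210 through 230 can be sketched as follows. This is an illustrative Python sketch under assumed names (RenderRequest, ElementLayer, ImplementationLayer and their fields are hypothetical, not defined in this disclosure); a real element layer might instead be written in a platform-independent language such as C++.

```python
from dataclasses import dataclass

@dataclass
class RenderRequest:          # hypothetical: an OS-originated request
    operation: str            # e.g. "scroll", "click"
    amount: int               # e.g. scroll distance, here in whole items

class ImplementationLayer:    # manages layout and supplies app data
    def __init__(self, items):
        self.items = items
    def data_for(self, start, count):
        return self.items[start:start + count]

class ElementLayer:           # implemented by the app, separate from the OS
    def __init__(self, impl, viewport_items=3):
        self.impl = impl
        self.viewport_items = viewport_items
        self.offset = 0
    def process(self, request):
        # block 220: interpret the request (here: advance the offset)
        if request.operation == "scroll":
            self.offset += request.amount
        return self.offset
    def render(self, request):
        # blocks 210-230: process, then render using implementation-layer data
        start = self.process(request)
        return self.impl.data_for(start, self.viewport_items)

impl = ImplementationLayer([f"item{i}" for i in range(10)])
layer = ElementLayer(impl)
print(layer.render(RenderRequest("scroll", 2)))  # → ['item2', 'item3', 'item4']
```

The sketch keeps OS-specific concerns out of ElementLayer: the operating system only hands it a request, and all data access goes through the implementation layer.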
Fig. 3 shows a schematic diagram of an overall architecture for rendering an interface according to an embodiment of the present disclosure. The method 200 may be implemented by the process 300 shown in fig. 3.
As shown in fig. 3, a platform layer 302 may be included, which may be a platform based on a different architecture or system, such as various mobile operating system platforms. The platform layer 302 may include a scroll container (ScrollContainer) that may be used to carry UI components and ensure that content can be smoothly scrolled with user interactions.
In some embodiments, the scrolling container of the platform layer 302 may also include a rendering module 304 that may render the interface presented to the user based on the UI rendering and layout information provided by the element layer 310, thereby ensuring that the interface is properly displayed.
According to some embodiments of the present disclosure, the scroll container of the platform layer 302 may include a gesture or animation module 306. Gesture or animation module 306 may manage and receive gesture interactions and animated scrolling from a user based on an application programming interface (API) provided by the system. For example, in some embodiments, gesture or animation module 306 may process a gesture from a user, or an animated scroll initiated by the user, and distribute the scroll distance to element layer 310.
In some embodiments, the scroll container of the platform layer 302 may also include a UI method module 308 that may enable UI-related method calls, such as scrolling to a specified location, dynamically adjusting a layout, and so forth. In other embodiments, UI method module 308 may also process logic related to the platform layer 302, such as implementing node snapping effects, scroll inertia adjustment, and the like.
In some embodiments, a method implemented in accordance with the present disclosure may further include an element layer 310, which is the core logic layer: it carries all the capabilities of the elements, interacts with the layout engine and the data source, and docks with the scroll container of the platform layer 302. Specifically, in some embodiments, the content a user sees at the platform layer 302, including components or child nodes such as buttons, text boxes, and pictures, is represented at the element layer 310 in the form of element objects and managed by the element layer 310.
In some embodiments, the element layer 310 may be implemented within an application and separate from the operating system in which the application resides. In some embodiments, for example, element layer 310 may be written in C++, thereby implementing a long list component with capability independent of the platform, reducing development and maintenance costs.
It should be understood that in embodiments of the present disclosure, the terms child node and element may refer to the same or different objects and may be used interchangeably. For example, in some embodiments, both may represent the same object: a UI component or a visualized UI entity in an interface.
In some embodiments, the element layer 310 may have an attribute module 314 that may be used to process all attributes set by the front-end platform layer 302. For example, a button drawn at the platform layer 302 may correspond to an element object in the element layer 310, which may include all properties of the button, such as a style of the button (e.g., blue), a position of the button (coordinates on the screen), and so on.
In some embodiments, the element layer 310 may include an event module 316 for sending or responding to events from the platform layer. For example, in some embodiments, the event module 316 may be responsive to a user scroll action event or to a user click action on a button. In some embodiments, the element layer 310 may also include a rendering module 318 that may determine the manner in which an element or node is rendered. For example, rendering module 318 may determine information of the size, location, spacing, visibility, etc. of the elements to be rendered. In some embodiments, rendering module 318 may translate the elements in element layer 310 into UI components and ultimately be presented on a screen.
According to embodiments of the present disclosure, the element layer 310 may also include a reclamation module 320 that may be used to implement recycling and reuse of child nodes or elements in the interface. In some embodiments, the reclamation module 320 may monitor which elements have moved outside the interface or are no longer visible on the screen. The reclamation module 320 may then mark them as recyclable and place them into a recycling pool instead of destroying them immediately. The reclamation module 320 can redraw previously used elements onto the interface by comparing the new data with the old elements (e.g., by difference computation).
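The recycling behavior described above can be sketched as follows. This is an illustrative Python sketch; RecyclePool and its methods are hypothetical names, not from this disclosure.

```python
class RecyclePool:
    """Sketch of a reclamation module: off-screen elements are
    recycled instead of destroyed, then reused for new data."""
    def __init__(self):
        self._pool = []

    def recycle(self, element):
        # mark as recoverable and keep it instead of destroying it
        self._pool.append(element)

    def obtain(self, new_data):
        if self._pool:
            element = self._pool.pop()   # reuse an existing element
            element["data"] = new_data   # rebind it to the new content
            return element
        return {"data": new_data}        # nothing to reuse: create one

pool = RecyclePool()
off_screen = {"data": "old item"}
pool.recycle(off_screen)                 # element scrolled out of view
reused = pool.obtain("new item")
print(reused is off_screen)              # → True: same object, new data
```

The point of the pool is that `obtain` avoids allocating a fresh element whenever a recycled one exists, which is what keeps long-list scrolling cheap.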
An implementation layer 322 may also be included according to embodiments of the present disclosure. In the implementation layer 322, there may be a layout manager 336 for UI layout and layout calculations. In some embodiments, layout manager 336 may determine anchor logic to ensure that the user is able to precisely align the content when scrolling, making the scrolling experience smoother. The layout manager 336 may provide different layout approaches such as a linear layout 324, a grid layout 326, and a waterfall flow layout 328.
In some embodiments, the linear layout 324 may be applied to an application such as a vertically or horizontally arranged list, e.g., chat software, news stream application, etc. In the linear layout 324, each node may be arranged in order, rendering only elements within the visible region when scrolling. In some embodiments, the grid layout 326 may be applied in a multi-column application such as an album or photo album where it is desirable to ensure that the height of each column is consistent at the time of rendering so that the grid content is aligned. In some embodiments, the waterfall flow layout 328 may be applied to an e-commerce item list, the elements in the list may have contents of irregular heights, the height of each column may be different, and each element may be sequentially populated into the column with the shortest current height.
In some embodiments, implementation layer 322 may also include an adapter 330. The adapter 330 may parse the data source 332 of the application, e.g., obtain data from a server or local application store, and adapt or update the data according to the requirements of the UI elements or operations. The adapter 330 may also be updated by consuming the difference information data 334; e.g., the adapter 330 may retrieve new data from the data source by comparing the difference data.
Fig. 4 shows a schematic diagram of a process 400 for rendering an interface according to an embodiment of the disclosure. As shown in fig. 4, in some embodiments, when a user performs a slide gesture on the platform layer scroll view 402 of an electronic device, the platform layer may send operation information about the scroll distance to a list 404 in the element layer. In some embodiments, the scroll distance represents the pixel value by which the user slides. In some embodiments, the platform layer scroll view 402 may also detect the user's scroll direction, scroll speed, and the like.
In some embodiments, when the list 404 receives the operation information of the scroll distance, the list 404 may consume the difference data (difference information) 408 through the difference data algorithm 414 to determine a change in the old and new data, such as comparing the previously displayed data (displayed list items) with the new data to be displayed (such as the list items to be displayed after scrolling). In some embodiments, the list 404 may consume the difference data 408 to determine new, deleted, updated list items.
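The disclosure does not specify how the difference data algorithm 414 works internally; a minimal key-based stand-in that classifies list items as added, removed, or kept might look like this (illustrative Python, hypothetical names):

```python
def diff(old_keys, new_keys):
    """Key-based difference computation: compare the previously
    displayed list items with the items to be displayed and classify
    each as added, removed, or kept."""
    old, new = set(old_keys), set(new_keys)
    return {
        "added":   [k for k in new_keys if k not in old],
        "removed": [k for k in old_keys if k not in new],
        "kept":    [k for k in new_keys if k in old],
    }

# items visible before vs. after a scroll
print(diff(["a", "b", "c"], ["b", "c", "d"]))
# → {'added': ['d'], 'removed': ['a'], 'kept': ['b', 'c']}
```

Only the "added" items need new data from the data source; "removed" items are candidates for the recycling pool, and "kept" items can be left untouched. (A real diff would also detect moved and updated items, which this sketch omits.)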
In some embodiments, list 404 may also call a recycle multiplexing module 410 in the element layer to determine whether a new list item to be displayed has a reusable item available in list item cache pool 412. If a reusable list item is available in list item cache pool 412, it may be fetched directly from list item cache pool 412, thereby avoiding creating a duplicate list item. In some embodiments, if there is no matching reusable item, the data source 406 may be requested to obtain new list item data.
In some embodiments, the reclamation multiplexing module 410 may also monitor which elements in the interface have moved outside the interface or are not visible on the screen. The reclamation multiplexing module 410 may then mark them as recyclable and place them in the list item cache pool 412.
In some embodiments, the updated data may also be rearranged (416) by a rendering module in the element layer, for example updating newly added, deleted, or moved elements or list items. In some embodiments, views may also be added to or removed from the scroll view 402 of the platform layer through the list 404, so that local updates of the views are realized, overall UI redrawing is avoided, rendering efficiency is improved, and smooth scrolling of the application is ensured.
Fig. 5 shows a schematic diagram of another process 500 for rendering an interface according to an embodiment of the disclosure. In some embodiments, at block 502, the list may parse the difference information and initiate the layout. For example, in accordance with some embodiments of the present disclosure, during the first screen rendering of the interface, the list of element layers may parse the difference information from the data source to determine new added list items, deleted list items, moved list items, modified list items, and so forth. The list may then mark the list items that need to be updated, e.g., as dirty status, and preset the width and height of the list items, thereby causing the layout engine in the element layer to initiate the layout.
At block 504, the list may obtain its own layout results. In some embodiments, for example, the layout engine may send pre-layout list results with multiple list item placeholders to the list based on multiple list items in the list and the layout manner. At block 506, the list may initiate a child node layout in conjunction with the difference information. In some embodiments, for example, the list may populate a pre-layout list having a plurality of list item placeholders with newly added list items, moved list items, modified list items, and the like to determine the child node layout. In some embodiments, the child node layout may also be triggered based on a scroll event 520 sent by the platform layer component.
At block 508, it is determined whether the list area may continue to be filled. In some embodiments, the element layer may determine whether there is a blank region in the list region of the current view. At block 510, when it is determined that there is a blank region in the list region, child node rendering may be initiated. For example, in some embodiments, the element layer may execute a rendering function to generate a complete element tree.
At block 512, child node resolution may be performed. For example, the constructed element tree may be parsed, and an attribute set for each element is set. For example, the parsed set of properties may include styles (such as colors, fonts, margins, etc.), properties (such as width and height, location), events (such as clicks, touches), and gestures (such as swipes, zooms, etc.).
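The attribute-set parsing of block 512 can be sketched as follows. This is an illustrative Python sketch; the attribute names and their grouping into styles, properties, events, and gestures are assumptions for illustration, not the disclosure's actual format.

```python
def parse_element(node):
    """Split a raw element description into the attribute sets named
    in block 512: styles, properties, events, and gestures."""
    groups = {
        "styles":     {"color", "font", "margin"},
        "properties": {"width", "height", "x", "y"},
        "events":     {"on_click", "on_touch"},
        "gestures":   {"on_swipe", "on_zoom"},
    }
    parsed = {name: {} for name in groups}
    for key, value in node.items():
        for name, keys in groups.items():
            if key in keys:
                parsed[name][key] = value   # route the attribute to its set
    return parsed

button = {"color": "blue", "width": 44, "on_click": "submit"}
print(parse_element(button)["styles"])  # → {'color': 'blue'}
```

In a full implementation this routine would run once per element while walking the constructed element tree, so that the layout stage (block 514) can consume the properties and the event system can consume the events and gestures.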
At block 514, a child node layout may be performed. In some embodiments, during the layout stage, the entire element tree previously constructed may be laid out. The specific layout of each element is determined, for example, based on the size of the element, constraints of the parent element, and other attributes.
The layout engine in the element layer may then return child node layout information 516 to the list, which may again complete the filling and determine whether a blank region remains; if so, it may continue initiating child node rendering.
At block 518, the platform layer generates UI information or layout information. In some embodiments, when the visible area is fully filled by the child nodes, the view update and layout update generated in the process of rendering the child nodes can be refreshed to the platform layer in the form of a UI operation queue, and the platform layer can complete view rendering in turn. Operations 502 through 514 may be implemented at an element layer and an implementation layer in an application, while operations 518 and 520 may be implemented at a platform layer of an operating system, according to embodiments of the present disclosure.
FIG. 6 illustrates a schematic diagram of a process 600 for rendering a single column layout of an interface, according to an embodiment of the disclosure. According to embodiments of the present disclosure, a node such as block 614 may represent an unloaded node whose height is the default height. Each layout may begin at an anchor point location (anchoroffset) and fill the blank screen area of the device as completely as possible. The process shown in fig. 6 consists of two fills, where the first fill comprises blocks 602, 604, 606, and 608 and fills to the end (top down).
In some embodiments, the layout start anchor location may be determined by an anchor policy at 602, thereby determining the available area of the region to be filled; additional area may be added if it is available. At block 604, the consumed region sizes may be calculated one by one using the post-layout sizes. Content offsets due to filling may be adjusted at blocks 606 through 608.
The next fill (bottom up) may be performed at step 610 with an area size equal to the available area minus the consumed area. Content offsets due to filling may be adjusted at blocks 612 through 614. In some embodiments, filling terminates when the available area is consumed; if the number of data sources is insufficient, the offset may be adjusted to create additional area, and refilling begins in the reverse direction.
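The top-down fill loop can be sketched as follows. This is an illustrative Python sketch: the rule that filling proceeds from an anchor and stops once the available area is consumed follows the description above, while the function signature and concrete numbers are assumptions.

```python
def fill(heights, available, anchor):
    """Fill downward from an anchor index until the available area is
    consumed (unloaded nodes would contribute a default height).
    Returns the indices laid out and the total area consumed."""
    placed, used, i = [], 0, anchor
    while i < len(heights) and used < available:
        placed.append(i)       # lay out this node
        used += heights[i]     # subtract its size from the available area
        i += 1
    return placed, used

heights = [100, 80, 120, 90, 60]   # per-node heights in pixels
print(fill(heights, available=250, anchor=1))  # → ([1, 2, 3], 290)
```

Note that the last node placed may overshoot the available area (290 > 250); the fill stops only after the area is fully covered, and a bottom-up pass from the anchor would then cover the region above it.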
Fig. 7 shows a schematic diagram of another process 700 for rendering a single column layout of an interface, in accordance with an embodiment of the disclosure. According to an embodiment of the present disclosure, the process shown in fig. 7 is three fills, with blocks 702-704 being the first fill to end (FillToEnd), blocks 706, 708, 710 being the second fill to start (FillToStart), and blocks 712-714 being the third fill to end (FillToEnd).
The filling method is the same as or similar to that of fig. 6. In some embodiments, the position of the initial layout anchor point can be found through an anchor point strategy, and the available area of the region to be filled is determined; if additional area exists, it is added. The consumed area can then be calculated using the post-layout sizes and subtracted from the available area for the next fill. When the available area is consumed, filling is finished; if the number of data sources is insufficient, the offset is adjusted to create additional area, and refilling begins in the reverse direction.
FIG. 8 shows a schematic diagram of a process 800 for rendering a multi-column grid layout of an interface, in accordance with an embodiment of the disclosure. In some embodiments, when there are multiple columns, they may be laid out in a grid-aligned fashion, with the tops of the items in each row aligned. The multi-column (stream) layout policy is an extension of the single column layout: instead of filling a single column, a row is filled first. For example, in some embodiments, the anchor policy determines the anchor location to be element 2, and top alignment of each row may then be performed based on element 2. The specific layout method can be understood with reference to figs. 6 to 7.
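The row-first, top-aligned filling just described can be sketched minimally as follows (illustrative names, not the disclosed implementation):

```python
def grid_layout(heights, columns):
    """Fill a row at a time; every item in a row shares the row's top,
    and the next row starts below the tallest item of the current row."""
    positions = {}
    top = 0
    for start in range(0, len(heights), columns):
        row = range(start, min(start + columns, len(heights)))
        for col, idx in enumerate(row):
            positions[idx] = (col, top)  # tops within a row are aligned
        top += max(heights[i] for i in row)
    return positions, top  # per-item (column, top) and total consumed height
```

For example, with heights [10, 30, 20, 40, 10] and three columns, elements 0-2 share top 0, elements 3-4 share top 30 (below the tallest item of row 0), and the consumed height is 70.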
Fig. 9 shows a schematic diagram of a process 900 for rendering a waterfall layout of an interface, according to an embodiment of the disclosure. The waterfall layout may be represented as a staggered multi-column, multi-element layout into which blocks of element data may be continually loaded and appended to the current tail as the page scrollbar scrolls down. The waterfall layout in some embodiments is compact: each element is arranged in the currently shortest column, leaving no extra space between elements along the scrolling main axis of the screen. The column in which each element is arranged depends on the layout results of the preceding elements.
In some embodiments the layout may be based on difference data. For example, at block 902, rendering of all child nodes within the viewable area of a list may be initiated according to a preset height of the child nodes. In some embodiments, at blocks 904, 906, and 908, after the layout information for the nodes (e.g., the actual height of each node) is obtained, the layout flow may be re-executed. If unrendered child nodes remain in the visible area of the list, rendering of those child nodes continues to be initiated.
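The preset-height-then-relayout flow can be illustrated as follows. This is a sketch under assumed names; `preset_height` stands for the default height of a not-yet-measured node, and the function is hypothetical.

```python
def children_to_render(known_heights, preset_height, viewport, total_children):
    """Estimate how many children are needed to cover the viewport, using
    the real (actual) height where known and the preset height otherwise.
    Re-running this after real heights arrive models the re-executed
    layout flow."""
    covered, count = 0, 0
    while covered < viewport and count < total_children:
        covered += known_heights.get(count, preset_height)
        count += 1
    return count
```

A first pass with no real heights might render four children; once their real heights turn out smaller than the preset, a second pass renders more children to fill the remaining visible area.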
Fig. 10 shows a schematic diagram of another process 1000 for rendering a waterfall layout of an interface, according to an embodiment of the disclosure. The process fills to the end (top down); the offset and the position of element 0 need not be adjusted. Only the end position of each item placeholder that currently holds an element need be calculated, and the next element is filled into the shortest column. For example, in some embodiments, an anchor location may be determined to be element 2 based on an anchor policy, and the next element is then filled into the shortest column.
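The shortest-column rule can be sketched as follows (an illustrative reconstruction, not the disclosed code; names are hypothetical):

```python
def waterfall_fill(heights, columns):
    """Top-down waterfall fill: each element is appended to the currently
    shortest column, so no extra space is left on the scrolling main axis."""
    bottoms = [0] * columns          # current end position of each column
    placement = []                   # (column, top) per element, in order
    for h in heights:
        col = bottoms.index(min(bottoms))  # shortest column; ties go leftmost
        placement.append((col, bottoms[col]))
        bottoms[col] += h
    return placement, bottoms
```

Note that the column chosen for each element depends on the accumulated column ends, i.e., on the layout results of the preceding elements, as stated above.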
Fig. 11 shows a schematic diagram of yet another process 1100 for rendering a waterfall layout of an interface, according to an embodiment of the disclosure. The process may be a fill-to-start process (bottom up), with block 1102 being the initial state. At block 1104, the offset may be adjusted by finding an anchor, i.e., the element whose item placeholder has the minimum top, such as element 4 in block 1104. In some embodiments, the top of the item placeholder may be aligned with the spacing of the current offset, and the offset (offset = anchor->newTop - alignment) may be recalculated after all item placeholders are refreshed according to the waterfall layout at the end of each layout of the list item components.
In some embodiments, because of the offset change, the distance between the top line of the item placeholders holding elements on the current device display screen and the available area at that time is recalculated, as shown in block 1106. In some embodiments, the next node to be laid out may also be determined based on the top line, and this is repeated until the nodes on the current device display are completely filled, as shown in block 1108.
Fig. 12 shows a schematic diagram of a process 1200 for performing asynchronous layout according to an embodiment of the disclosure. In some scenarios, under an asynchronous layout or multithreading model, a long list cannot synchronously acquire the true width and height of a child node while the child node is being rendered, so whether the visual space is filled cannot be determined. An asynchronous layout or multi-threaded model may therefore perform multiple layout passes, compared with the single layout pass of a list under a single-threaded model.
In some embodiments, during the first layout at block 1202, an element layer according to embodiments of the present disclosure may trigger rendering of all child nodes within the visual range of the current device display screen according to a preset width and height of each child node (which may be set by the front end). For example, in some embodiments, each child node may correspond to an item placeholder (such as item placeholder 0, item placeholder 1, item placeholder 2) in the long list for storing its layout information.
In some embodiments, at block 1204, a unique operation ID may be generated for each invocation of child node rendering, and the pair <operation ID, item placeholder> placed into a map, which may be stored in data source 1206. In some embodiments, the component may also be bound to an operation ID at block 1208.
In some embodiments, at block 1210, when the asynchronous rendering of a single node is complete, the currently rendered component may be passed back to the long list node along with the operation ID. The long list may find the corresponding item placeholder in the map according to the operation ID, completing the binding of component and placeholder. In some embodiments, the long list may then obtain the true width and height of the component, and the long list layout is re-triggered; this may be the second layout. At step 1212, if the visual space is not filled, rendering of more child nodes may continue to be triggered. At step 1214, if the visual space is filled, the rendering may end and the corresponding component is retrieved via the data source.
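The operation-ID bookkeeping described in blocks 1204-1210 might look like the following; the class and method names are hypothetical and introduced only for illustration.

```python
import itertools

class LongListBinding:
    """Maps each asynchronous render call to its item placeholder via a
    unique operation ID, then binds the returned component on completion."""
    def __init__(self):
        self._next_id = itertools.count()
        self._pending = {}   # operation ID -> item placeholder index
        self.bound = {}      # placeholder index -> rendered component

    def request_render(self, placeholder):
        op_id = next(self._next_id)          # unique per render invocation
        self._pending[op_id] = placeholder   # <operation ID, item placeholder>
        return op_id

    def on_render_complete(self, op_id, component):
        placeholder = self._pending.pop(op_id)  # look up via operation ID
        self.bound[placeholder] = component     # bind component and placeholder
        return placeholder
```

Completions may arrive out of order; the map guarantees each component still reaches the right placeholder, after which the true width and height can be read and the second layout triggered.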
Thus, the method implemented by the present disclosure may provide a solution for long list rendering in a cross-platform framework that does not rely on any higher-level platform layer components, sinking the core capabilities of the long list to the C++ side. The platform layer may retain only a basic scrolling container to carry view and interaction functions. An element layer implemented in accordance with the present disclosure may implement a long list component whose core capabilities are platform independent.
The scheme implemented according to the present disclosure not only ensures very good multi-terminal consistency, but also greatly reduces the development and maintenance costs of implementing long lists on new platforms. According to the method, the long list component can align core capabilities such as rendering performance, scrolling interaction, and the event system across multiple platforms; multi-terminal consistency is remarkably improved while single-terminal characteristics are preserved, development experience is improved, and maintenance cost is reduced. The long list component can be brought to multiple platforms smoothly at very low cost.
Fig. 13 shows a schematic block diagram of an apparatus 1300 for rendering an interface according to an embodiment of the disclosure. The apparatus 1300 may be used to implement the methods or steps described with reference to fig. 1-12.
As shown in fig. 13, the apparatus 1300 includes a receiving unit 1310, a processing unit 1320, and a rendering unit 1330. The receiving unit 1310 is configured to receive a rendering request from an operating system. The processing unit 1320 is configured to process the rendering request by an element layer of the application, wherein the element layer is implemented by the application and separate from the operating system. The rendering unit 1330 is configured to render, by the element layer, elements in the interface based on the processing and data from an implementation layer of the application, wherein the implementation layer is configured to manage a layout of the interface and pass data for the application to the element layer.
In some embodiments, the receiving unit 1310 is configured to pass the rendering request from a platform layer of the operating system to the element layer of the application, and wherein the rendering request includes one or more of a scroll operation, a click operation, a slide operation, a long press operation, a rotate operation, and a drag operation on the interface.
In some embodiments, rendering unit 1330 is configured to add or remove one or more elements in the interface by the element layer in response to the rendering request.
In some embodiments, rendering unit 1330 is configured to determine, by the element layer, whether a blank area exists in the interface, to re-read, by the element layer, data from a data source of the implementation layer to fill the blank area in response to the blank area existing in the interface, and to stop reading, by the element layer, data from the data source in response to the blank area not existing in the interface.
In some embodiments, rendering unit 1330 is configured to identify one or more elements in the interface, reclaim the identified one or more elements by the element layer in response to the identified one or more elements not being visible in the interface due to the rendering request, compare new data from a data source to data in the reclaimed one or more elements by the element layer, and draw the reclaimed one or more elements on the interface by the element layer in response to the new data from the data source being the same as the data in the reclaimed one or more elements.
In some embodiments, the layout of the interface implementing layer management includes one or more of a linear layout, a grid layout, and a waterfall layout.
In some embodiments, rendering unit 1330 is configured to determine, by the element layer, a starting position in the element in the interface and fill a blank area in the interface twice in a top-down and bottom-up manner in response to the layout being a single column layout of the linear layout, and to refill, by the element layer, the blank area in the interface in a top-down manner in response to a blank area still being present in the interface.
In some embodiments, rendering unit 1330 is configured to preferentially fill each row in the interface by the element layer in response to the layout being a multi-column layout of the grid layout.
In some embodiments, the rendering unit 1330 is configured to determine, by the element layer, a first layout based on a preset height of the element in response to the layout being the waterfall layout, and determine, by the element layer, a second layout based on the layout information in response to acquiring the layout information of the element.
In some embodiments, rendering unit 1330 is configured to determine, by the element layer, an end position of an item placeholder having the element and fill a next element in a shorter column in response to the waterfall layout being top-down filled, determine, by the element layer, an offset after each layout in response to the waterfall layout being bottom-up filled, and adjust the layout based on the offset.
In some embodiments, the interface is a long list layout, the elements in the interface are list items in the long list, and the list items have a mapping relationship with operations in the rendering request.
It should be noted that further actions or steps described with reference to figs. 1 to 12 may be implemented by the apparatus 1300 shown in fig. 13. For example, apparatus 1300 may include additional modules or units to implement the acts or steps described above, or some of the units or modules shown in fig. 13 may be further configured to implement those acts or steps; these are not repeated here.
Fig. 14 shows a schematic block diagram of an example device 1400 that may be used to implement embodiments of the present disclosure. As shown, the device 1400 includes a computing unit 1401 that may perform various suitable actions and processes in accordance with computer program instructions stored in a Read Only Memory (ROM) 1402 or loaded from a storage unit 1408 into a Random Access Memory (RAM) 1403. In the RAM 1403, various programs and data required for the operation of the device 1400 can also be stored. The computing unit 1401, the ROM 1402, and the RAM 1403 are connected to each other through a bus 1404. An input/output (I/O) interface 1405 is also connected to the bus 1404.
A number of components in the device 1400 are connected to the I/O interface 1405, including an input unit 1406, e.g., a keyboard, a mouse, etc., an output unit 1407, e.g., various types of displays, speakers, etc., a storage unit 1408, e.g., magnetic disk, optical disk, etc., and a communication unit 1409, e.g., a network card, modem, wireless communication transceiver, etc. The communication unit 1409 allows the device 1400 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunications networks.
The computing unit 1401 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 1401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1401 performs the various methods and processes described above, such as the method 200. For example, in some embodiments, the method 200 may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1400 via the ROM 1402 and/or the communication unit 1409. When a computer program is loaded into RAM 1403 and executed by computing unit 1401, one or more steps of method 200 described above may be performed. Alternatively, in other embodiments, computing unit 1401 may be configured to perform method 200 by any other suitable means (e.g. by means of firmware).
In some embodiments, the methods and processes described above may be implemented as a computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for performing aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical encoding device such as punch cards or raised structures in a groove having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
The computer program instructions for performing the operations of the present disclosure can be assembly instructions, instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language and conventional procedural programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information of computer readable program instructions, which can execute the computer readable program instructions.
These computer readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of devices, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvement of the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (14)

1. A method for rendering an interface, comprising:
receiving a rendering request from an operating system;
Processing the rendering request by an element layer of an application, wherein the element layer is implemented by the application and separate from the operating system;
Rendering, by the element layer, elements in the interface based on the processing and data from an implementation layer of the application, wherein the implementation layer is configured to manage a layout of the interface and pass data for the application to the element layer, and
The interface is a long list layout, the elements in the interface are list items in the long list, and the list items have a mapping relation with operations in the rendering request.
2. The method of claim 1, wherein receiving a rendering request from the operating system comprises:
Passing the rendering request by a platform layer of the operating system to the element layer of the application, and
Wherein the rendering request includes one or more of a scrolling operation, a clicking operation, a sliding operation, a long press operation, a rotating operation, and a dragging operation on the interface.
3. The method of claim 1, wherein rendering, by the element layer, the element in the interface based on the processing and data from the implementation layer of the application comprises:
One or more elements are added or removed in the interface by the element layer in response to the rendering request.
4. The method of claim 1, wherein rendering, by the element layer, the element in the interface based on the processing and data from the implementation layer of the application further comprises:
the element layer determines whether a blank area exists in the interface;
Re-reading, by the element layer, data from a data source of the implementation layer to fill in the blank area in response to the blank area being present in the interface, and
In response to the absence of the blank region in the interface, ceasing to read data from the data source by the element layer.
5. The method of claim 1, wherein rendering, by the element layer, the element in the interface based on the processing and data from the implementation layer of the application further comprises:
Identifying one or more elements in the interface;
Recycling, by the element layer, the identified one or more elements in response to the identified one or more elements not being visible in the interface due to the rendering request;
Comparing, by the element layer, new data from the data source with the data in the recovered one or more elements, and
Responsive to the new data from the data source being the same as the data in the one or more elements recovered, the one or more elements recovered are drawn by the element layer on the interface.
6. The method of claim 1, wherein the layout of the interface managed by the implementation layer comprises one or more of:
A linear layout;
A grid layout; and
A waterfall layout.
7. The method of claim 6, wherein rendering, by the element layer, the element in the interface based on the processing and data from an implementation layer of the application comprises:
Determining a starting position in the element in the interface by the element layer and filling a blank area in the interface twice in a top-down and bottom-up manner in response to the layout being a single column layout of the linear layout, and
And in response to the blank area still existing in the interface, refilling the blank area in the interface by the element layer in a top-down mode.
8. The method of claim 6, wherein rendering, by the element layer, the element in the interface based on the processing and data from an implementation layer of the application further comprises:
each row in the interface is preferentially populated by the element layer in response to the layout being a multi-column layout of the grid layout.
9. The method of claim 6, wherein rendering, by the element layer, the element in the interface based on the processing and data from an implementation layer of the application further comprises:
Determining, by the element layer, a first layout based on a preset height of the element in response to the layout being the waterfall layout, and
In response to obtaining layout information for the element, a second layout is determined by the element layer based on the layout information.
10. The method of claim 9, wherein rendering, by the element layer, the element in the interface based on the processing and data from an implementation layer of the application further comprises:
Responsive to the waterfall layout being top-down filled, determining, by the element layer, an end position of an item placeholder having the element and filling a next element in a shorter column;
In response to the waterfall layout being bottom-up filled, an offset after each layout is determined by the element layer and the layout is adjusted based on the offset.
11. An apparatus for rendering an interface, comprising:
a receiving unit configured to receive a rendering request from an operating system;
a processing unit configured to process the rendering request by an element layer of an application, wherein the element layer is implemented by the application and is separate from the operating system;
A rendering unit configured to render, by the element layer, elements in the interface based on the processing and data from an implementation layer of the application, wherein the implementation layer is configured to manage a layout of the interface and pass data for the application to the element layer, and
The interface is a long list layout, the elements in the interface are list items in the long list, and the list items have a mapping relation with operations in the rendering request.
12. A computing device, comprising:
at least one processing unit;
at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, the instructions when executed by the at least one processing unit cause the computing device to perform the method of any of claims 1 to 10.
13. A computer storage medium comprising machine executable instructions which, when executed by a device, cause the device to perform the method of any of claims 1 to 10.
14. A computer program product comprising machine executable instructions which, when executed by a device, cause the device to perform the method of any of claims 1 to 10.
CN202510097182.7A 2025-01-21 2025-01-21 Method, device, equipment and medium for rendering interface Active CN119902845B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510097182.7A CN119902845B (en) 2025-01-21 2025-01-21 Method, device, equipment and medium for rendering interface

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202510097182.7A CN119902845B (en) 2025-01-21 2025-01-21 Method, device, equipment and medium for rendering interface

Publications (2)

Publication Number Publication Date
CN119902845A CN119902845A (en) 2025-04-29
CN119902845B true CN119902845B (en) 2025-10-21

Family

ID=95470330

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510097182.7A Active CN119902845B (en) 2025-01-21 2025-01-21 Method, device, equipment and medium for rendering interface

Country Status (1)

Country Link
CN (1) CN119902845B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113742014A (en) * 2021-08-11 2021-12-03 深圳Tcl新技术有限公司 Interface rendering method and device, electronic equipment and storage medium
CN118334219A (en) * 2024-04-19 2024-07-12 北京字跳网络技术有限公司 Method, apparatus, device, medium and program product for rendering an object

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7340718B2 (en) * 2002-09-30 2008-03-04 Sap Ag Unified rendering
US10157593B2 (en) * 2014-02-24 2018-12-18 Microsoft Technology Licensing, Llc Cross-platform rendering engine
US20230342166A1 (en) * 2016-04-27 2023-10-26 Coda Project, Inc. System, method, and apparatus for publication and external interfacing for a unified document surface
WO2017195206A1 (en) * 2016-05-11 2017-11-16 Showbox Ltd. Systems and methods for adapting multi-media content objects
CN113626113B (en) * 2020-05-08 2024-04-05 北京沃东天骏信息技术有限公司 Page rendering method and device
CN113421144B (en) * 2021-05-10 2025-03-21 北京沃东天骏信息技术有限公司 A page display method, device, equipment, and storage medium
CN117608445A (en) * 2023-11-22 2024-02-27 北京字跳网络技术有限公司 Application page rendering method, rendering device, electronic device and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113742014A (en) * 2021-08-11 2021-12-03 深圳Tcl新技术有限公司 Interface rendering method and device, electronic equipment and storage medium
CN118334219A (en) * 2024-04-19 2024-07-12 北京字跳网络技术有限公司 Method, apparatus, device, medium and program product for rendering an object

Also Published As

Publication number Publication date
CN119902845A (en) 2025-04-29

Similar Documents

Publication Publication Date Title
CN106062705B (en) Cross-platform rendering engine
CN110096277B (en) Dynamic page display method and device, electronic equipment and storage medium
US10775971B2 (en) Pinch gestures in a tile-based user interface
US9195362B2 (en) Method of rendering a user interface
CN110221889B (en) Page display method and device, electronic equipment and storage medium
US8984448B2 (en) Method of rendering a user interface
US20070226647A1 (en) Methods of manipulating a screen space of a display device
CN109145272B (en) Text rendering and layout method, apparatus, device and storage medium
CN113204401B (en) Browser rendering methods, terminals and storage media
CN110992112A (en) Method and device for processing advertisement information
EP3008697B1 (en) Coalescing graphics operations
CN109800039B (en) User interface display method and device, electronic equipment and storage medium
CN119902845B (en) Method, device, equipment and medium for rendering interface
CN113536755A (en) Method, apparatus, electronic device, storage medium and product for generating posters
CN114237795A (en) Terminal interface display method and device, electronic equipment and readable storage medium
CN112581589A (en) View list layout method, device, equipment and storage medium
CN108021366A (en) Interface animation realization method, device, electronic equipment, storage medium
CN114510159A (en) Writing track display method and device and storage medium
CN111949868A (en) Content access method, device, electronic equipment and storage medium
CN119902846B (en) Method, apparatus, device and medium for rendering elements in an interface
US11899898B2 (en) Coherent gestures on touchpads and touchscreens
CN113961843B (en) A page list refreshing method, device, electronic device and storage medium
CN114398122B (en) Input method, device, electronic device, storage medium and product
CN115908674A (en) Quantum circuit diagram rendering method, device, equipment, storage medium and program product
CN118897678A (en) Flutter component redrawing method, device and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant