CN113134230B - Clustering method and device for virtual objects, storage medium and electronic device - Google Patents
- Publication number
- CN113134230B CN113134230B CN202110420679.XA CN202110420679A CN113134230B CN 113134230 B CN113134230 B CN 113134230B CN 202110420679 A CN202110420679 A CN 202110420679A CN 113134230 B CN113134230 B CN 113134230B
- Authority
- CN
- China
- Prior art keywords
- objects
- fusion
- initial
- entity
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/60—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
Abstract
The application relates to a clustering method and device for virtual objects, a storage medium and an electronic device, wherein the method comprises the following steps: voxelizing the virtual objects to be clustered in the current scene to obtain entity objects corresponding to the virtual objects to be clustered; merging the entity objects to obtain initial fusion objects, wherein the volume of the bounding volume after merging the entity objects included in each initial fusion object falls within a target threshold range; merging the initial fusion objects according to voxel data of the initial fusion objects to obtain target fusion objects, wherein the volumes of the bounding volumes corresponding to the plurality of entity objects included in the target fusion objects fall within the target threshold range; and generating a clustering result according to the target fusion objects. The method and the device solve the technical problem of low accuracy in clustering virtual objects.
Description
Technical Field
The present disclosure relates to the field of computers, and in particular, to a method and apparatus for clustering virtual objects, a storage medium, and an electronic device.
Background
Currently, 3D (three-dimensional) large-scene games are becoming more and more common, and the requirements of such games on picture quality keep rising. To satisfy players' increasingly demanding tastes and the demand for higher game quality, MMO (Massively Multiplayer Online) games and the explosive popularity of battle-royale ("chicken-eating") games have made the 3D scenes of mobile games larger and larger.
In existing HLOD (Hierarchical Level of Detail) systems, automatic clustering is generally computed with a scheme based on bounding volumes (bounding spheres or bounding boxes): the merging priority is estimated as the percentage of the merged bounding volume that is occupied by the bounding volumes before merging. That is, subject to the limit on the merged volume, objects are preferentially merged into the clusters with the largest estimates, thereby realizing automatic clustering.
With bounding-volume-based schemes, the results of automatic clustering are not ideal, because there are large errors between the bounding volume and the displayed 3D object's mesh shape. In practice it may happen that the mesh volume is small while the bounding volume is large, as with the wall of a building or a road surface, so that objects are preferentially merged into the wall or onto the road.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The application provides a virtual object clustering method, a virtual object clustering device, a storage medium and an electronic device, which are used for at least solving the technical problem of low accuracy of virtual object clustering in the related technology.
According to an aspect of an embodiment of the present application, there is provided a clustering method of virtual objects, including:
voxelization of virtual objects to be clustered in a current scene is carried out, and entity objects corresponding to each virtual object to be clustered are obtained;
merging the entity objects to obtain initial fusion objects, wherein the volume of a bounding volume after merging the entity objects included in each initial fusion object falls into a target threshold range;
merging the initial fusion objects according to voxel data of the initial fusion objects to obtain target fusion objects, wherein volumes of bounding volumes corresponding to a plurality of entity objects included in the target fusion objects fall into the target threshold range;
generating a clustering result according to the target fusion object.
According to another aspect of the embodiments of the present application, there is also provided a clustering apparatus for virtual objects, including:
The voxelization module is used for voxelizing the virtual objects to be clustered in the current scene to obtain entity objects corresponding to each virtual object to be clustered;
the merging module is used for merging the entity objects to obtain initial fusion objects, wherein the volume of the bounding volumes after the entity objects included in each initial fusion object are merged falls into a target threshold range;
the clustering module is used for merging the initial fusion objects according to voxel data of the initial fusion objects to obtain target fusion objects, wherein the volumes of the bounding volumes corresponding to the plurality of entity objects included in the target fusion objects fall into the target threshold range; generating a clustering result according to the target fusion object.
According to another aspect of the embodiments of the present application, there is also provided a storage medium including a stored program that when executed performs the above-described method.
According to another aspect of the embodiments of the present application, there is also provided an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor executing the method described above by the computer program.
In the embodiments of the application, the virtual objects to be clustered in the current scene are voxelized to obtain the entity object corresponding to each virtual object to be clustered; the entity objects are merged to obtain initial fusion objects, where the volume of the bounding volume after merging the entity objects included in each initial fusion object falls within a target threshold range; the initial fusion objects are merged according to their voxel data to obtain target fusion objects, where the volumes of the bounding volumes corresponding to the plurality of entity objects included in the target fusion objects fall within the target threshold range; and a clustering result is generated according to the target fusion objects. Because the virtual objects are voxelized, the voxels reflect the volumes of the meshes more accurately, so the obtained bounding volumes of the objects conform better to the real shape of the model. This reduces the error between the bounding volume and the displayed mesh shape of the virtual object, achieves the technical effect of improving the accuracy of clustering virtual objects, and thereby solves the technical problem of low accuracy in clustering virtual objects.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a hardware environment of a virtual object clustering method according to an embodiment of the present application;
FIG. 2 is a flow diagram of an alternative virtual object clustering method in accordance with an embodiment of the present application;
FIG. 3 is a schematic diagram of an alternative virtual object voxelization in accordance with embodiments of the present application;
FIG. 4 is a schematic diagram of a process of generating an initial MergeData object according to an alternative embodiment of the present application;
FIG. 5 is a schematic diagram of a process for generating clusters according to an alternative embodiment of the present application;
FIG. 6 is a schematic diagram of an alternative virtual object clustering apparatus in accordance with an embodiment of the present application;
Fig. 7 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the solution of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of the embodiments of the present application, a method embodiment of clustering of virtual objects is provided.
Alternatively, in the present embodiment, the above-described clustering method of virtual objects may be applied to a hardware environment constituted by the terminal 101 and the server 103 as shown in fig. 1. As shown in fig. 1, the server 103 is connected to the terminal 101 through a network and may be used to provide services (such as game services, application services, etc.) to the terminal or to clients installed on the terminal. A database may be provided on the server, or independently of the server, to provide data storage services for the server 103. The terminal 101 is not limited to a PC, a mobile phone, a tablet computer, or the like. The clustering method of the virtual objects in the embodiment of the present application may be executed by the server 103, by the terminal 101, or by both the server 103 and the terminal 101 together. When executed by the terminal 101, the method may also be executed by a client installed on it.
FIG. 2 is a flow chart of an alternative method of clustering virtual objects, as shown in FIG. 2, according to an embodiment of the present application, the method may include the steps of:
Step S202, voxelizing a virtual object to be clustered in a current scene to obtain a physical object corresponding to each virtual object to be clustered;
step S204, merging the entity objects to obtain initial fusion objects, wherein the volume of the bounding volumes after the entity objects included in each initial fusion object are merged falls into a target threshold range;
after the above step S204, the obtained initial fusion object may be subjected to clustering processing by, but not limited to, the following steps, thereby obtaining a clustering result:
step S206, merging the initial fusion objects according to voxel data of the initial fusion objects to obtain target fusion objects, wherein the volumes of bounding volumes corresponding to a plurality of entity objects included in the target fusion objects fall into the target threshold range;
step S208, generating clustering results according to the target fusion object.
Through the above steps S202 to S208, the virtual objects to be clustered in the current scene are voxelized to obtain entity objects, the entity objects are merged to obtain initial fusion objects satisfying the clustering condition, the initial fusion objects are then merged to obtain the target fusion objects, and the clustering result is generated according to the target fusion objects. Because the virtual objects are voxelized, the voxels reflect the volumes of the meshes more accurately, so the obtained bounding volumes of the objects conform better to the real shape of the model. This reduces the error between the bounding volume and the displayed mesh shape of the virtual objects, achieves the technical effect of improving the accuracy of clustering virtual objects, and thereby solves the technical problem of low clustering accuracy.
Alternatively, in this embodiment, the method for clustering virtual objects may be, but is not limited to being, applied to the process of automatically clustering virtual objects (such as 3D objects) in a scene in an HLOD (Hierarchical Level of Detail) system.
In the technical solution provided in step S202, the current scene may include, but is not limited to, a game scene, an animation scene, a movie scene, and the like. The current scene may be, but is not limited to, a 2D scene, a 3D scene, or a higher dimensional scene, etc.
Alternatively, in the present embodiment, the virtual objects in the current scene may include, but are not limited to, various models in the scene, such as: mountains, water, trees, buildings, facilities, characters, props, etc. in a game scene.
Alternatively, in this embodiment, the Entity object corresponding to each virtual object to be clustered may include, but is not limited to, an Entity object, and so on.
Alternatively, in the present embodiment, voxelization is the conversion of the geometric representation of an object into the voxel representation closest to the object, producing a voxel dataset that includes not only the surface information of the model but also its internal properties. FIG. 3 is a schematic diagram of optional voxelization of a virtual object according to an embodiment of the application. As shown in FIG. 3, the wall surface of an L-shaped mesh in a game scene is voxelized to obtain a voxelized model. After voxelization, the voxels reflect the volume of the mesh more accurately, and the number of small cubes in the voxelized model represents the volume of the mesh.
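To make the voxel representation concrete, the following Python sketch is illustrative only and is not part of the claimed embodiments: the function name `voxelize_aabb` and the box-based approximation are assumptions (the embodiment voxelizes arbitrary meshes). It enumerates the integer voxel coordinates covered by an axis-aligned box, so that an L-shaped wall as in FIG. 3 can be approximated by the union of two thin boxes:

```python
import itertools

def voxelize_aabb(mn, mx, voxel_size=1.0):
    """Return the set of integer voxel coordinates covered by an axis-aligned
    box from corner mn to corner mx (a crude stand-in for mesh voxelization)."""
    lo = [int(c // voxel_size) for c in mn]
    hi = [int(-(-c // voxel_size)) for c in mx]  # ceiling division
    return {
        (x, y, z)
        for x, y, z in itertools.product(
            range(lo[0], max(hi[0], lo[0] + 1)),
            range(lo[1], max(hi[1], lo[1] + 1)),
            range(lo[2], max(hi[2], lo[2] + 1)),
        )
    }

# An L-shaped wall approximated as two thin boxes, as in FIG. 3:
wall = voxelize_aabb((0, 0, 0), (4, 1, 1)) | voxelize_aabb((0, 0, 0), (1, 1, 4))
print(len(wall))  # -> 7
```

The 7-voxel count reflects the L-shape's actual volume, whereas its axis-aligned bounding box spans 16 voxel cells — precisely the gap that makes bounding-volume-only clustering schemes inaccurate.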
As an optional embodiment, voxelizing the virtual objects to be clustered in the current scene, to obtain the entity object corresponding to each virtual object includes:
s11, acquiring a virtual object allowing clustering from the virtual objects included in the current scene as the virtual object to be clustered;
s12, voxelizing the virtual object to be clustered to obtain voxel data of the virtual object to be clustered;
s13, generating an initial entity object corresponding to the virtual object to be clustered;
and S14, storing the voxel data of the virtual object to be clustered into the initial entity object to obtain the entity object corresponding to the virtual object to be clustered.
Alternatively, in the present embodiment, the virtual objects to be clustered may be, but are not limited to, virtual objects in a scene that allow clustering. The virtual objects to be clustered may be automatically identified or manually selected by an operator.
Alternatively, in this embodiment, each voxel of a virtual object to be clustered may store just an integer world coordinate, so the set of voxel data of the virtual object to be clustered, i.e., a set of world coordinates, represents the world region occupied by the mesh.
Alternatively, in the present embodiment, each virtual object to be clustered is voxelized, and the result is then saved in a newly generated Entity object.
In the technical solution provided in step S204, the target threshold range may be, but is not limited to, a clustering constraint. It may be an upper limit only, meaning that the volume of the bounding volume after merging the entity objects included in each initial fusion object cannot be too large; it may also be a two-sided range, meaning that this volume can be neither too large nor too small.
Optionally, in this embodiment, the initial fusion object obtained by merging the entity objects includes a set of virtual objects, and the smallest box that completely wraps this set of virtual objects is the merged bounding volume. The volume of the merged bounding volume must meet the limit requirement; for example, there is an upper limit requiring that the volume of the merged bounding volume not be excessive.
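As a minimal sketch of this merge constraint (hypothetical names; entities are assumed to be stored as sets of integer voxel coordinates, as described above), the merged bounding volume is the axis-aligned box wrapping the union of the two voxel sets, and a pair may only be merged while that box stays within the upper limit:

```python
def aabb_volume(voxels):
    # Volume (in voxel units) of the smallest axis-aligned box
    # wrapping a set of integer voxel coordinates.
    xs, ys, zs = zip(*voxels)
    return ((max(xs) - min(xs) + 1) *
            (max(ys) - min(ys) + 1) *
            (max(zs) - min(zs) + 1))

def can_merge(entity_a, entity_b, max_volume):
    # The pair may be merged only if the bounding volume of the
    # union of their voxel sets does not exceed the upper limit.
    return aabb_volume(entity_a | entity_b) <= max_volume
```

For example, two adjacent wall segments merge, while an object far across the scene does not, even if both objects are individually small.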
As an optional embodiment, merging the entity objects to obtain an initial fusion object includes:
s21, traversing all the entity objects, and judging whether the volume of the bounding volume after combining any two entity objects falls into the target threshold range;
S22, creating a fusion object for each pair of entity objects of which the volumes of the merged bounding volumes fall into the target threshold range;
s23, storing each pair of entity objects into one fusion object to obtain the initial fusion object.
Alternatively, in the present embodiment, the fusion object may include, but is not limited to, a MergeData object.
Optionally, in this embodiment, any two Entity objects are tested to determine whether their merged bounding volume meets the size constraint of automatic clustering (i.e., the target threshold range described above). For each pair of Entity objects that meets the condition, a MergeData object is created and the pair of Entity objects is stored in it, yielding an initial MergeData object.
In an optional embodiment, a process of generating initial MergeData objects from the virtual objects to be clustered is provided. FIG. 4 is a schematic diagram of this process according to an optional embodiment of the present application. As shown in FIG. 4, the virtual objects in the scene that can be clustered are collected as the virtual objects to be clustered; each virtual object is voxelized, and an Entity object recording the voxel data of that virtual object is generated for each of them. All Entity objects are then traversed, and a merging judgment is performed on every two Entity objects to determine whether the merged bounding volume meets the constraint condition (for example, whether it is smaller than a preset value). If not, no MergeData object is generated; if so, the two Entity objects are recorded in one MergeData object, the voxel data of the MergeData object is recorded in the object, and the number of voxels of the voxel data is calculated, so as to obtain the initial MergeData object.
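The pairwise generation process of FIG. 4 can be sketched as follows (an illustrative Python sketch; the names `build_initial_merge_data` and the dict-based MergeData representation are assumptions, and entities are again represented as voxel-coordinate sets):

```python
from itertools import combinations

def aabb_volume(voxels):
    # Volume of the smallest axis-aligned box wrapping a voxel set.
    xs, ys, zs = zip(*voxels)
    return ((max(xs) - min(xs) + 1) *
            (max(ys) - min(ys) + 1) *
            (max(zs) - min(zs) + 1))

def build_initial_merge_data(entities, max_volume):
    # entities: {name: voxel-coordinate set}. For every pair whose merged
    # bounding volume satisfies the constraint, record a MergeData entry
    # holding the pair and the voxel count of the union (as in FIG. 4).
    merge_data = []
    for a, b in combinations(entities, 2):
        union = entities[a] | entities[b]
        if aabb_volume(union) <= max_volume:
            merge_data.append({"pair": (a, b), "voxel_count": len(union)})
    return merge_data
```

Pairs whose merged bounding volume exceeds the limit simply produce no MergeData entry, matching the "no" branch of the judgment in FIG. 4.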
In the technical solution provided in step S206, the merging process may, but is not limited to, adopt a graph traversal over the data structure, for example depth-first traversal or breadth-first traversal. Through this traversal, the nodes that are reachable from one another and whose merged size meets the limit are put together, thereby obtaining the target fusion object.
Alternatively, in the present embodiment, the voxel data of the initial fusion object may include, but is not limited to, the number of voxels of the initial fusion object (such as the voxel count of the MergeData object calculated in FIG. 4), the voxel distance between the entity objects in the initial fusion object, and so on.
As an optional embodiment, merging the initial fusion object according to voxel data of the initial fusion object, to obtain a target fusion object includes:
s31, merging the initial fusion objects according to voxel data of the initial fusion objects to obtain candidate fusion objects, wherein the candidate fusion objects do not comprise public entity objects, and the volume of a bounding volume corresponding to each candidate fusion object falls into the target threshold range;
S32, screening fusion objects meeting target conditions from the candidate fusion objects to serve as target fusion objects.
Optionally, in this embodiment, the candidate fusion objects obtained after merging do not include a common entity object, and the volume of the bounding volume corresponding to each candidate fusion object falls within the target threshold range. That is, the preliminary merging of the initial fusion objects involves two conditions: first, the candidate fusion objects obtained after merging do not include a common entity object; second, the volume of the bounding volume corresponding to each merged candidate fusion object is below a certain threshold, i.e., not too large.
Optionally, in this embodiment, after the initial fusion objects are preliminarily combined to obtain the candidate fusion objects, the candidate fusion objects are further screened by using target conditions that need to be met by preset clustering, so as to obtain the target fusion objects.
As an optional embodiment, merging the initial fusion object according to voxel data of the initial fusion object, to obtain a candidate fusion object includes:
s41, judging whether two entity objects included in the initial fusion object are intersected or not according to voxel data of the initial fusion object;
S42, adding the initial fusion objects intersected by the entity objects into a first list, and adding the initial fusion objects not intersected by the entity objects into a second list;
s43, sorting the first list and the second list from large to small according to the number of voxels of the initial fusion object respectively to obtain a third list corresponding to the first list and a fourth list corresponding to the second list;
s44, merging the initial fusion objects comprising the common entity objects in the third list until the initial fusion objects comprising the common entity objects do not exist, and obtaining a fifth list, wherein the volume of the bounding volume corresponding to the initial fusion objects included in the fifth list falls into the target threshold range;
s45, adding the fourth list to the fifth list to obtain a sixth list;
s46, merging the initial fusion objects comprising the common entity objects in the sixth list until the initial fusion objects comprising the common entity objects do not exist, and obtaining a seventh list, wherein the volume of a bounding volume corresponding to the initial fusion objects included in the seventh list falls into the target threshold range;
S47, determining the initial fusion object which is effective in the seventh list and comprises entity objects with the number larger than 1 as the candidate fusion object.
Alternatively, in this embodiment, the first list may be, but is not limited to, a ContactList, and the second list may be, but is not limited to, a NoContactList; the third list is obtained by sorting the ContactList, and the fourth list by sorting the NoContactList. The sorted ContactList is merged to obtain the fifth list. The NoContactList is then appended to the merged ContactList to obtain the sixth list. The sixth list is merged again to obtain the seventh list, and the initial fusion objects in the seventh list that satisfy the preset condition are the candidate fusion objects.
Optionally, in this embodiment, for each created MergeData object, it is determined whether the two Entity objects it includes intersect; MergeData objects whose Entity objects intersect are stored in the ContactList, and MergeData objects whose Entity objects do not intersect are stored in the NoContactList.
Optionally, in this embodiment, according to the number of voxels contained in the two Entity objects in each initial fusion object, the ContactList and the NoContactList are respectively ordered from large to small, so as to obtain the ordered ContactList and NoContactList as the third list and the fourth list.
Alternatively, in this embodiment, after the merging operation is performed on the ContactList, the NoContactList is added to the back of the ContactList after the merging operation, and the merging operation is continued.
Optionally, in this embodiment, after the merging operation is performed twice, an initial MergeData object that is valid and includes a number of entity objects greater than 1 is screened from the seventh list as a candidate fusion object. Valid objects may include, but are not limited to: all objects included in the list, objects marked as valid markers in the list, or objects not marked as invalid markers in the list.
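The list-based procedure of steps S41 to S47 can be sketched as follows. This is an illustrative Python sketch under stated assumptions: `merge_pass` and `cluster` are hypothetical names, entities are voxel-coordinate sets, and a single linear merging pass stands in for "merge until no initial fusion objects include a common entity object":

```python
def aabb_volume(voxels):
    xs, ys, zs = zip(*voxels)
    return ((max(xs) - min(xs) + 1) *
            (max(ys) - min(ys) + 1) *
            (max(zs) - min(zs) + 1))

def merge_pass(items, entities, max_volume):
    # Fold entries sharing a common entity into the earlier entry when the
    # combined bounding volume still fits (S44/S46); otherwise the later
    # entry loses the shared entity (cf. S63/S64).
    result = []
    for names in items:
        names = set(names)
        for earlier in result:
            common = earlier & names
            if common:
                union = set().union(*(entities[n] for n in earlier | names))
                if aabb_volume(union) <= max_volume:
                    earlier |= names   # absorbed; later entry becomes invalid
                    names = set()
                else:
                    names -= common    # keep only the non-shared entities
                break
        if len(names) > 1:             # S47: keep entries with > 1 entity
            result.append(names)
    return result

def cluster(pairs, entities, max_volume):
    # S41-S43: split pairs by whether their entities intersect (share a
    # voxel coordinate), then sort each list by voxel count, largest first.
    contact, no_contact = [], []
    for a, b in pairs:
        (contact if entities[a] & entities[b] else no_contact).append({a, b})
    count = lambda p: len(set().union(*(entities[n] for n in p)))
    contact.sort(key=count, reverse=True)
    no_contact.sort(key=count, reverse=True)
    # S44-S46: merge the ContactList, append the NoContactList, merge again.
    return merge_pass(merge_pass(contact, entities, max_volume) + no_contact,
                      entities, max_volume)
```

Intersecting pairs are merged first because touching geometry (a lamp on a wall) is the most natural cluster; the non-intersecting pairs are only folded in afterwards.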
As an optional embodiment, determining whether two entity objects included in the initial fusion object intersect according to voxel data of the initial fusion object includes:
s51, judging whether coordinates stored by voxels in two entity objects included in the initial fusion object comprise the same coordinates or not;
s52, determining that the two entity objects included in the initial fusion object intersect when the coordinates stored in the voxels in the two entity objects included in the initial fusion object include the same coordinates.
Alternatively, in the present embodiment, whether two entity objects intersect may be determined, but is not limited to being determined, by checking whether the same coordinates are stored in both. For example, each Entity object stores a set of world coordinates through its voxels; if, in the voxel data stored by the two Entity objects of an initial MergeData object, a voxel in one Entity object stores the same coordinate as a voxel in the other, the two Entity objects are considered to intersect.
Alternatively, in the present embodiment, in the case where the same coordinates are not included in the coordinates stored in the voxels in the two physical objects included in the initial fusion object, it is determined that the two physical objects included in the initial fusion object do not intersect. The case where the same coordinates are not included may be taken as a way of determining that the two physical objects do not intersect, or may be determined that the two physical objects do not intersect in other ways, which is not limited in this embodiment.
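Under the same voxel-set representation, this intersection test reduces to set disjointness (an illustrative sketch with hypothetical names and example data):

```python
def entities_intersect(voxels_a, voxels_b):
    # Each Entity stores a set of integer world coordinates; the pair
    # intersects exactly when the sets share at least one coordinate.
    return not voxels_a.isdisjoint(voxels_b)

wall  = {(0, 0, 0), (1, 0, 0), (2, 0, 0)}
lamp  = {(2, 0, 0), (2, 1, 0)}   # touches the wall at (2, 0, 0)
crate = {(5, 0, 0)}              # nowhere near the wall
print(entities_intersect(wall, lamp))   # True
print(entities_intersect(wall, crate))  # False
```

`isdisjoint` short-circuits on the first shared coordinate, so the test is cheap even for large voxel sets.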
As an alternative embodiment, merging the initial fusion objects including the common entity object in the third list includes:
s61, acquiring two initial fusion objects comprising a common entity object from the third list;
S62, judging whether the volume of the total bounding volume corresponding to the two initial fusion objects falls within the target threshold range;
s63, in the case where the volume of the total bounding volume corresponding to the two initial fusion objects falls within the target threshold range, adding the non-common entity objects included in the later-ordered initial fusion object to the earlier-ordered initial fusion object, and deleting the later-ordered initial fusion object or marking it with a target tag indicating that the object is invalid;
and S64, in the case where the volume of the total bounding volume corresponding to the two initial fusion objects does not fall within the target threshold range, deleting the common entity objects included in the later-ordered initial fusion object.
Optionally, in this embodiment, the above procedure traverses the initial fusion objects in the third list in a depth-first manner. The initial fusion objects in the third list may also be traversed in a breadth-first manner.
Optionally, in this embodiment, for any two initial MergeData objects in the third list, if they include a common Entity object, for example, one includes the two entities A and B and the other includes the two entities B and C, the two initial MergeData objects need to be merged.
Alternatively, in this embodiment, the later-ordered initial fusion object that has been merged may be deleted to indicate that it is invalid, or marked with the target tag to indicate that it is invalid. If deletion is used to indicate invalidity, all initial fusion objects in the seventh list can be considered valid initial fusion objects. If marking with the target tag is used to indicate invalidity, the initial fusion objects in the seventh list that are not marked with the target tag can be considered valid initial fusion objects.
Alternatively, in this embodiment, the target tag may be, but is not limited to, an Unvalidated tag. An initial fusion object marked Unvalidated is a fusion object that has already been merged into another initial fusion object.
Alternatively, in this embodiment, the merging rules of the above two merging processes may include, but are not limited to: Rule I, later-ordered initial fusion objects are merged into earlier-ordered initial fusion objects as far as possible; Rule II, merging ends when no two MergeData objects in the list include a common Entity object.
Optionally, in this embodiment, when merging, all elements in the later-indexed initial MergeData object are merged into the earlier-indexed initial MergeData object. If the merged initial MergeData object meets the clustering condition, the later-indexed initial MergeData object is marked as Unvalidated; if it does not, the common Entity object is deleted from the later-indexed initial MergeData object. For example, if the first initial MergeData object includes the two Entity objects A and B and the second initial MergeData object includes the two Entity objects B and C, merging C into the first yields an initial MergeData object including the three Entity objects A, B, and C.
If, after C is merged into the first initial MergeData object, the bounding volume formed by the three entities A, B, and C exceeds the bounding-volume limit of the clustering, then C cannot be merged into the first initial MergeData object; at this time, the element B is deleted from the second initial MergeData object, which becomes an initial MergeData object including only one Entity object.
If, after C is merged into the first initial MergeData object, the bounding volume formed by the three entities A, B, and C does not exceed the bounding-volume limit of the clustering, then C can be merged into the first initial MergeData object; at this time, the second initial MergeData object is marked as Unvalidated.
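The A/B/C merge rule described above can be sketched as follows. The dictionary layout, the `valid` flag standing in for the Unvalidated tag, and the axis-aligned box estimate of the bounding volume are all illustrative assumptions, not the patent's actual data structures.

```python
def bounding_volume(entities):
    """Volume of the axis-aligned box enclosing every voxel coordinate
    of every entity (a simple stand-in for the clustering's volume check)."""
    coords = [c for e in entities for c in e["voxels"]]
    spans = [max(c[i] for c in coords) - min(c[i] for c in coords) + 1
             for i in range(3)]
    return spans[0] * spans[1] * spans[2]

def merge_common(first, second, max_volume):
    """Merge `second` (later-ordered) into `first` (earlier-ordered) when
    the merged bounding volume stays within the limit; otherwise delete the
    common entity from the later-ordered object (steps S61-S64)."""
    names_first = {e["name"] for e in first["entities"]}
    extra = [e for e in second["entities"] if e["name"] not in names_first]
    if bounding_volume(first["entities"] + extra) <= max_volume:
        first["entities"] += extra     # e.g. C joins {A, B}
        second["valid"] = False        # later object marked Unvalidated
    else:
        second["entities"] = [e for e in second["entities"]
                              if e["name"] not in names_first]

# Entities A, B, C with single-voxel payloads for brevity
A = {"name": "A", "voxels": [(0, 0, 0)]}
B = {"name": "B", "voxels": [(1, 0, 0)]}
C = {"name": "C", "voxels": [(2, 0, 0)]}
m1 = {"entities": [A, B], "valid": True}
m2 = {"entities": [B, C], "valid": True}
merge_common(m1, m2, max_volume=10)
print([e["name"] for e in m1["entities"]])  # ['A', 'B', 'C']
print(m2["valid"])                          # False
```

Repeating `merge_common` over every pair sharing a common entity, in list order, corresponds to the depth-first merging pass; merging ends when no two objects in the list share an entity.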
Optionally, in this embodiment, the process of merging the initial fusion objects that include a common entity object in the sixth list, until no initial fusion object including a common entity object remains, to obtain the seventh list is similar to the merging process for the third list and is not repeated here.
As an alternative embodiment, screening the fusion object satisfying the target condition from the candidate fusion objects as the target fusion object includes:
S71, determining whether each candidate fusion object meets the target condition;
s72, eliminating fusion objects which do not meet the target conditions from the candidate fusion objects to obtain the target fusion objects;
wherein the target condition includes at least one of:
the number of voxels of each entity object in the fusion object is greater than the number of target voxels;
the number of entity objects included in the fusion object is greater than the number of target entity objects;
voxel distances between the entity objects included in the fusion object are smaller than the target distance.
Alternatively, in this embodiment, one or more of the voxel-count restriction, the entity-object-count restriction, and the voxel-distance restriction may be used as the target condition for screening the target fusion objects.
Alternatively, in this embodiment, taking the voxel-distance restriction as an example, the voxel distance refers to the distance between two objects as estimated by their voxels. Assume two virtual objects whose corresponding voxel data sets are A and B: the distance between each voxel in A and each voxel in B is computed, and the smallest of all these distances can be regarded as the voxel distance between the two objects. If, after clustering is finished, the voxel distance between any two virtual objects in a cluster is larger than a threshold, the result does not meet the clustering condition, and the cluster can be removed directly from the clustering result.
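The voxel distance just described — the nearest distance over all voxel pairs of the two objects — can be sketched as a brute-force minimum. This is an illustration only; a practical implementation would likely use a spatial index rather than the O(|A|·|B|) scan shown here.

```python
import math

def voxel_distance(voxels_a, voxels_b):
    """Smallest Euclidean distance between any voxel of A and any voxel
    of B, taken as the voxel distance between the two virtual objects."""
    return min(math.dist(p, q) for p in voxels_a for q in voxels_b)

a = [(0, 0, 0), (1, 0, 0)]
b = [(4, 0, 0), (5, 0, 0)]
print(voxel_distance(a, b))  # 3.0 — closest pair is (1,0,0) and (4,0,0)
```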
Optionally, in this embodiment, a threshold may be set for the number of entity objects in each candidate fusion object, and when the candidate fusion objects are screened finally, candidate fusion objects that do not reach the threshold are removed.
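The screening of steps S71 and S72 against the three optional target conditions could be sketched as below. The field names, threshold values, and strict comparisons are assumptions for illustration; the description only requires that candidates failing any enabled condition be removed.

```python
import math
from itertools import combinations

def nearest_voxel_distance(va, vb):
    """Smallest distance between any voxel of one entity and any of the other."""
    return min(math.dist(p, q) for p in va for q in vb)

def meets_target_conditions(fusion, target_voxels, target_entities, target_distance):
    """Check the three optional target conditions: per-entity voxel count,
    entity count, and pairwise voxel distance (steps S71/S72)."""
    entities = fusion["entities"]
    if not all(len(e["voxels"]) > target_voxels for e in entities):
        return False                       # condition 1: voxel count per entity
    if not len(entities) > target_entities:
        return False                       # condition 2: entity count
    return all(nearest_voxel_distance(x["voxels"], y["voxels"]) < target_distance
               for x, y in combinations(entities, 2))  # condition 3: distance

candidates = [
    {"entities": [{"voxels": [(0, 0, 0), (1, 0, 0)]},
                  {"voxels": [(2, 0, 0), (3, 0, 0)]}]},
    {"entities": [{"voxels": [(0, 0, 0)]}]},   # too few entities and voxels
]
targets = [f for f in candidates
           if meets_target_conditions(f, target_voxels=1, target_entities=1,
                                      target_distance=5.0)]
print(len(targets))  # 1 — only the first candidate survives screening
```

Any subset of the three conditions could be enabled in practice, matching the "at least one of" wording above.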
In the technical solution provided in step S208, generating the clustering result according to the target fusion object includes:
s81, generating an initial clustering object for each target fusion object;
s82, storing the entity object included in each target fusion object into the initial clustering object to obtain a target clustering object;
s83, adding the target clustering object into a clustering list to obtain the clustering result.
Optionally, in this embodiment, the clustering object may be, but is not limited to, a Cluster object. After the target fusion objects are obtained, a Cluster object is generated for each target fusion object, the entity objects in that target fusion object are saved into the Cluster object, and all the generated Cluster objects are saved into a Cluster list, where the Cluster list is the result of the automatic clustering.
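Steps S81 through S83 amount to a straightforward conversion. The `Cluster` class and field names below are assumptions for illustration; the patent does not fix a concrete representation.

```python
class Cluster:
    """Hypothetical clustering object holding one target fusion's entities."""
    def __init__(self, entities):
        self.entities = list(entities)   # S82: save the fusion's entity objects

def generate_clusters(target_fusions):
    cluster_list = []
    for fusion in target_fusions:        # S81: one Cluster per target fusion
        cluster_list.append(Cluster(fusion["entities"]))
    return cluster_list                  # S83: the list is the clustering result

fusions = [{"entities": ["A", "B", "C"]}, {"entities": ["D", "E"]}]
clusters = generate_clusters(fusions)
print([c.entities for c in clusters])  # [['A', 'B', 'C'], ['D', 'E']]
```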
In an alternative embodiment, a process of generating clusters is provided. Fig. 5 is a schematic diagram of a process of generating clusters according to an alternative embodiment of the present application. As shown in fig. 5, an intersection judgment is performed on each initial MergeData object, where the judgment process may be: judging whether the voxels of the two Entity objects included in each initial MergeData object include the same coordinates; if so, adding the initial MergeData object to a ContactList, and if not, adding it to a NoContactList. After the intersection-judgment loop is finished, the initial fusion objects in the two resulting lists are sorted from large to small by voxel count. After sorting, the elements in the ContactList are merged depth-first, without considering the NoContactList during this merging. Depth-first merging is then performed again on the merged results, this time taking the elements in the NoContactList into account. The merged list is screened, and the MergeData objects meeting the element-number limit are converted into Cluster objects; the list formed by all the Cluster objects is the clustering result.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, or by means of hardware, but in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application may be embodied, in essence or in the part contributing to the prior art, in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing an electronic device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method described in the embodiments of the present application.
According to another aspect of the embodiments of the present application, there is also provided a virtual object clustering apparatus for implementing the virtual object clustering method. FIG. 6 is a schematic diagram of an alternative virtual object clustering apparatus, as shown in FIG. 6, according to an embodiment of the present application, which may include:
the voxelization module 62 is configured to voxeize virtual objects to be clustered in a current scene to obtain entity objects corresponding to each virtual object to be clustered;
the merging module 64 is configured to merge the entity objects to obtain initial fusion objects, where a volume of a bounding volume after merging the entity objects included in each initial fusion object falls within a target threshold range;
the clustering module 66 is configured to combine the initial fusion objects according to voxel data of the initial fusion objects to obtain target fusion objects, where volumes of bounding volumes corresponding to a plurality of entity objects included in the target fusion objects fall within the target threshold range; generating a clustering result according to the target fusion object.
It should be noted that, the voxelization module 62 in this embodiment may be used to perform step S202 in the embodiment of the present application, the merging module 64 in this embodiment may be used to perform step S204 in the embodiment of the present application, and the clustering module 66 in this embodiment may be used to perform step S206 and step S208 in the embodiment of the present application.
It should be noted that the above modules are the same as examples and application scenarios implemented by the corresponding steps, but are not limited to what is disclosed in the above embodiments. It should be noted that the above modules may be implemented in software or hardware as a part of the apparatus in the hardware environment shown in fig. 1.
Through the above modules, the virtual objects to be clustered in the current scene are voxelized to obtain entity objects, the entity objects are combined to obtain initial fusion objects meeting the clustering condition, and the initial fusion objects are clustered to obtain the target fusion objects as the final clustering result. After the virtual objects are voxelized, the voxels can reflect the volumes of the meshes more accurately, so that the obtained bounding volumes of the objects better conform to the real shape of the model. This achieves the purpose of reducing the error between the bounding volume and the displayed mesh shape of the virtual object, achieves the technical effect of improving the accuracy of clustering virtual objects, and solves the technical problem of low accuracy in clustering virtual objects.
As an alternative embodiment, the voxelization module comprises:
An obtaining unit, configured to obtain a virtual object that allows clustering from virtual objects included in the current scene as the virtual object to be clustered;
the voxelization unit is used for voxelizing the virtual object to be clustered to obtain the voxel data of the virtual object to be clustered;
the first generation unit is used for generating an initial entity object corresponding to the virtual object to be clustered;
and the first storage unit is used for storing the voxel data of the virtual object to be clustered into the initial entity object to obtain the entity object corresponding to the virtual object to be clustered.
As an alternative embodiment, the merging module includes:
the judging unit is used for traversing all the entity objects and judging whether the volume of the bounding volume after the combination of any two entity objects falls into the target threshold range or not;
the creation unit is used for creating a fusion object for each pair of entity objects of which the volumes of the combined bounding volumes fall into the target threshold range;
and the second storage unit is used for storing each pair of entity objects into one fusion object to obtain the initial fusion object.
As an alternative embodiment, the clustering module includes:
The first merging unit is used for merging the initial fusion objects according to voxel data of the initial fusion objects to obtain candidate fusion objects, wherein the candidate fusion objects do not comprise public entity objects, and the volume of a bounding volume corresponding to each candidate fusion object falls into the target threshold range;
and the screening unit is used for screening fusion objects meeting target conditions from the candidate fusion objects to serve as target fusion objects.
As an alternative embodiment, the first merging unit is configured to:
judging whether two entity objects included in the initial fusion object are intersected or not according to voxel data of the initial fusion object;
adding the initial fusion object intersected by the entity object to a first list, and adding the initial fusion object not intersected by the entity object to a second list;
sorting the first list and the second list from large to small according to the number of voxels of the initial fusion objects, respectively, to obtain a third list corresponding to the first list and a fourth list corresponding to the second list;
merging the initial fusion objects comprising the common entity objects in the third list until the initial fusion objects comprising the common entity objects do not exist, and obtaining a fifth list, wherein the volume of a bounding volume corresponding to the initial fusion objects included in the fifth list falls into the target threshold range;
Adding the fourth list to the fifth list to obtain a sixth list;
merging the initial fusion objects comprising the common entity objects in the sixth list until the initial fusion objects comprising the common entity objects do not exist, and obtaining a seventh list, wherein the volume of a bounding volume corresponding to the initial fusion objects included in the seventh list falls into the target threshold range;
and determining the initial fusion object which is valid in the seventh list and comprises entity objects with the number larger than 1 as the candidate fusion object.
As an alternative embodiment, the first merging unit is configured to:
judging whether coordinates stored by voxels in two entity objects included in the initial fusion object comprise the same coordinates or not;
and determining that the two entity objects included in the initial fusion object intersect under the condition that the coordinates stored by the voxels in the two entity objects included in the initial fusion object comprise the same coordinates.
As an alternative embodiment, the first merging unit is configured to:
acquiring two initial fusion objects comprising a common entity object from the third list;
judging whether the volume of the total bounding volume corresponding to the two initial fusion objects falls within the target threshold range;
in the case where the volume of the total bounding volume corresponding to the two initial fusion objects falls within the target threshold range, adding the non-common entity objects included in the later-ordered initial fusion object to the earlier-ordered initial fusion object, and deleting the later-ordered initial fusion object or marking it with a target tag indicating that the object is invalid;
and in the case where the volume of the total bounding volume corresponding to the two initial fusion objects does not fall within the target threshold range, deleting the common entity objects included in the later-ordered initial fusion object.
As an alternative embodiment, the screening unit is configured to:
determining whether each candidate fusion object meets the target condition;
removing fusion objects which do not meet the target conditions from the candidate fusion objects to obtain the target fusion objects;
wherein the target condition includes at least one of:
the number of voxels of each entity object in the fusion object is greater than the number of target voxels;
the number of entity objects included in the fusion object is greater than the number of target entity objects;
Voxel distances between the entity objects included in the fusion object are smaller than the target distance.
As an alternative embodiment, the clustering module includes:
the second generation unit is used for generating an initial clustering object for each target fusion object;
a third storing unit, configured to store the entity object included in each target fusion object to the one initial clustering object, to obtain a target clustering object;
and the adding unit is used for adding the target clustering object into a clustering list to obtain the clustering result.
It should be noted that the above modules are the same as examples and application scenarios implemented by the corresponding steps, but are not limited to what is disclosed in the above embodiments. It should be noted that the above modules may be implemented in software or in hardware as part of the apparatus shown in fig. 1, where the hardware environment includes a network environment.
According to another aspect of the embodiments of the present application, there is also provided an electronic device for implementing the above-mentioned clustering method of virtual objects.
Fig. 7 is a block diagram of an electronic device according to an embodiment of the present application. As shown in fig. 7, the electronic device may include: one or more processors 701 (only one is shown in the figure), a memory 703, and a transmission device 705, and may further include an input-output device 707.
The memory 703 may be used to store software programs and modules, such as program instructions/modules corresponding to the virtual object clustering method and apparatus in the embodiments of the present application, and the processor 701 executes the software programs and modules stored in the memory 703, thereby performing various functional applications and data processing, that is, implementing the virtual object clustering method described above. The memory 703 may include high speed random access memory, but may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory. In some examples, the memory 703 may further include memory located remotely from the processor 701, which may be connected to the electronic device through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 705 is used for receiving or transmitting data via a network, and may also be used for data transmission between a processor and a memory. Specific examples of the network described above may include wired networks and wireless networks. In one example, the transmission device 705 includes a network adapter (Network Interface Controller, NIC) that may be connected to other network devices and routers via a network cable to communicate with the internet or a local area network. In one example, the transmission device 705 is a Radio Frequency (RF) module for communicating with the internet wirelessly.
Among them, the memory 703 is used to store, in particular, application programs.
The processor 701 may call an application program stored in the memory 703 through the transmission means 705 to perform the steps of:
voxelization of virtual objects to be clustered in a current scene is carried out, and entity objects corresponding to each virtual object to be clustered are obtained;
merging the entity objects to obtain initial fusion objects, wherein the volume of a bounding volume after merging the entity objects included in each initial fusion object falls into a target threshold range;
merging the initial fusion objects according to voxel data of the initial fusion objects to obtain target fusion objects, wherein volumes of bounding volumes corresponding to a plurality of entity objects included in the target fusion objects fall into the target threshold range;
generating a clustering result according to the target fusion object.
By adopting the embodiments of the present application, a scheme for clustering virtual objects is provided. The virtual objects to be clustered in the current scene are voxelized to obtain entity objects, the entity objects are merged to obtain initial fusion objects meeting the clustering condition, the initial fusion objects are merged to obtain target fusion objects, and a clustering result is generated according to the target fusion objects. After the virtual objects are voxelized, the voxels can reflect the volumes of the meshes more accurately, so that the obtained bounding volumes of the objects better conform to the real shape of the model. This achieves the purpose of reducing the error between the bounding volume and the displayed mesh shape of the virtual object, achieves the technical effect of improving the accuracy of clustering virtual objects, and solves the technical problem of low accuracy in clustering virtual objects.
Alternatively, specific examples in this embodiment may refer to examples described in the foregoing embodiments, and this embodiment is not described herein.
It will be appreciated by those skilled in the art that the structure shown in fig. 7 is merely illustrative; the electronic device may be a smart phone (such as an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a mobile internet device (Mobile Internet Devices, MID), a PAD, etc. Fig. 7 does not limit the structure of the electronic device. For example, the electronic device may also include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in fig. 7, or have a different configuration than shown in fig. 7.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the above embodiments may be implemented by a program for instructing an electronic device to execute in conjunction with hardware, the program may be stored on a computer readable storage medium, and the storage medium may include: flash disk, read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), magnetic or optical disk, and the like.
Embodiments of the present application also provide a storage medium. Alternatively, in the present embodiment, the storage medium described above may be used for executing the program code of the clustering method of virtual objects.
Alternatively, in this embodiment, the storage medium may be located on at least one network device of the plurality of network devices in the network shown in the above embodiment.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of:
voxelization of virtual objects to be clustered in a current scene is carried out, and entity objects corresponding to each virtual object to be clustered are obtained;
merging the entity objects to obtain initial fusion objects, wherein the volume of a bounding volume after merging the entity objects included in each initial fusion object falls into a target threshold range;
merging the initial fusion objects according to voxel data of the initial fusion objects to obtain target fusion objects, wherein volumes of bounding volumes corresponding to a plurality of entity objects included in the target fusion objects fall into the target threshold range;
generating a clustering result according to the target fusion object.
Alternatively, specific examples in this embodiment may refer to examples described in the foregoing embodiments, and this embodiment is not described herein.
Alternatively, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing embodiment numbers of the present application are merely for describing, and do not represent advantages or disadvantages of the embodiments.
The integrated units in the above embodiments may be stored in the above-described computer-readable storage medium if implemented in the form of software functional units and sold or used as separate products. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including several instructions to cause one or more computer devices (which may be personal computers, servers or network devices, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present application.
In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of the units, such as the division of the units, is merely a logical function division, and may be implemented in another manner, for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing is merely a preferred embodiment of the present application, and it should be noted that several modifications and adaptations may be made by those skilled in the art without departing from the principles of the present application; such modifications and adaptations are also intended to fall within the scope of protection of the present application.
Claims (11)
1. A method for clustering virtual objects, comprising:
voxelization of virtual objects to be clustered in a current scene is carried out, and entity objects corresponding to each virtual object to be clustered are obtained;
Merging the entity objects to obtain initial fusion objects, wherein the volume of a bounding volume after merging the entity objects included in each initial fusion object falls into a target threshold range;
merging the initial fusion objects according to voxel data of the initial fusion objects to obtain target fusion objects, wherein volumes of bounding volumes corresponding to a plurality of entity objects included in the target fusion objects fall into the target threshold range;
generating a clustering result according to the target fusion object;
combining the initial fusion object according to the voxel data of the initial fusion object, wherein obtaining a target fusion object comprises: merging the initial fusion objects according to voxel data of the initial fusion objects to obtain candidate fusion objects, wherein the candidate fusion objects do not comprise public entity objects, and the volume of a bounding volume corresponding to each candidate fusion object falls into the target threshold range; and screening fusion objects meeting target conditions from the candidate fusion objects to serve as target fusion objects.
2. The method of claim 1, wherein voxelizing the virtual objects to be clustered in the current scene to obtain the entity object corresponding to each virtual object comprises:
Acquiring a virtual object allowing clustering from the virtual objects included in the current scene as the virtual object to be clustered;
voxelization of the virtual object to be clustered to obtain voxel data of the virtual object to be clustered;
generating an initial entity object corresponding to the virtual object to be clustered;
and storing the voxel data of the virtual object to be clustered into the initial entity object to obtain the entity object corresponding to the virtual object to be clustered.
3. The method of claim 1, wherein merging the physical objects to obtain an initial fusion object comprises:
traversing all the entity objects, and judging whether the volume of the bounding volume after the combination of any two entity objects falls into the target threshold range;
creating a fusion object for each pair of entity objects of which the volumes of the merged bounding volumes fall within the target threshold range;
and storing each pair of entity objects into one fusion object to obtain the initial fusion object.
4. The method of claim 1, wherein merging the initial fusion objects according to the voxel data of the initial fusion objects to obtain the candidate fusion objects comprises:
judging, according to the voxel data of each initial fusion object, whether the two entity objects included in the initial fusion object intersect;
adding the initial fusion objects whose entity objects intersect to a first list, and adding the initial fusion objects whose entity objects do not intersect to a second list;
sorting the first list and the second list, respectively, in descending order of the number of voxels of the initial fusion objects, to obtain a third list corresponding to the first list and a fourth list corresponding to the second list;
merging the initial fusion objects in the third list that include common entity objects until no initial fusion objects including common entity objects remain, to obtain a fifth list, wherein the volume of the bounding volume corresponding to each initial fusion object included in the fifth list falls within the target threshold range;
appending the fourth list to the fifth list to obtain a sixth list;
merging the initial fusion objects in the sixth list that include common entity objects until no initial fusion objects including common entity objects remain, to obtain a seventh list, wherein the volume of the bounding volume corresponding to each initial fusion object included in the seventh list falls within the target threshold range;
and determining the initial fusion objects in the seventh list that are valid and include more than one entity object as the candidate fusion objects.
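The seven-list pipeline of claim 4 can be sketched compactly. The `merge_common` callable stands in for the common-entity merging procedure detailed in claim 6; all field names are hypothetical:

```python
def cluster_pipeline(initial_fusions, merge_common):
    """Sketch of claim 4: split, sort, merge twice, then filter valid multi-entity fusions."""
    # first and second lists: split by whether the two entity objects intersect
    intersecting = [f for f in initial_fusions if f["intersects"]]
    disjoint     = [f for f in initial_fusions if not f["intersects"]]
    # third and fourth lists: sort each in descending order of voxel count
    by_voxels = lambda f: -f["voxel_count"]
    third, fourth = sorted(intersecting, key=by_voxels), sorted(disjoint, key=by_voxels)
    fifth = merge_common(third)        # merge fusions sharing an entity object
    sixth = fifth + fourth             # append the disjoint (fourth) list
    seventh = merge_common(sixth)      # merge again across the combined list
    # keep only valid fusion objects that still hold more than one entity object
    return [f for f in seventh if f.get("valid", True) and len(f["entities"]) > 1]
```

Merging the intersecting fusions first means overlapping geometry is clustered before spatially separate pairs are considered, which is the apparent point of the two-pass structure.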
5. The method of claim 4, wherein judging whether the two entity objects included in the initial fusion object intersect according to the voxel data of the initial fusion object comprises:
judging whether the coordinates stored in the voxels of the two entity objects included in the initial fusion object include any identical coordinate;
and determining that the two entity objects included in the initial fusion object intersect when the coordinates stored in the voxels of the two entity objects include an identical coordinate.
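With voxel coordinates held in sets, claim 5's intersection test reduces to a set-disjointness check (a sketch; the function name is illustrative):

```python
def entities_intersect(voxels_a, voxels_b):
    """Two entity objects intersect iff their voxel coordinate sets share a coordinate."""
    return not voxels_a.isdisjoint(voxels_b)
```

`set.isdisjoint` short-circuits on the first shared element, so the test costs at most one hash lookup per voxel of the smaller set.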
6. The method of claim 4, wherein merging the initial fusion objects in the third list that include common entity objects comprises:
acquiring, from the third list, two initial fusion objects that include a common entity object;
judging whether the volume of the combined bounding volume corresponding to the two initial fusion objects falls within the target threshold range;
when the volume of the combined bounding volume corresponding to the two initial fusion objects falls within the target threshold range, adding the non-common entity objects included in the later-sorted initial fusion object into the earlier-sorted initial fusion object, and deleting the later-sorted initial fusion object or marking it with a target label indicating that the object is invalid;
and when the volume of the combined bounding volume corresponding to the two initial fusion objects does not fall within the target threshold range, deleting the common entity objects included in the later-sorted initial fusion object.
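A sketch of claim 6's two-branch merge, with entity objects represented as name sets and a pluggable `volume_of` measure (all names hypothetical; the patent leaves the data layout open):

```python
def merge_pair(first, second, vol_min, vol_max, volume_of):
    """Merge two initial fusion objects that share a common entity object."""
    merged_entities = first["entities"] | second["entities"]
    if vol_min <= volume_of(merged_entities) <= vol_max:
        # in range: move the later fusion's non-common entities into the earlier one,
        # then mark the later-sorted fusion object invalid
        first["entities"] = merged_entities
        second["valid"] = False
    else:
        # out of range: drop the common entities from the later-sorted fusion object
        second["entities"] -= first["entities"]
```

The else-branch is what guarantees the loop in claim 4 terminates: each rejected merge strictly removes the shared entities, so no pair can be reconsidered.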
7. The method of claim 1, wherein screening, from the candidate fusion objects, the fusion objects satisfying the target condition as the target fusion objects comprises:
determining whether each candidate fusion object satisfies the target condition;
and removing the fusion objects that do not satisfy the target condition from the candidate fusion objects to obtain the target fusion objects;
wherein the target condition includes at least one of:
the number of voxels of each entity object in a fusion object is greater than a target voxel count;
the number of entity objects included in a fusion object is greater than a target entity object count;
the voxel distances between the entity objects included in a fusion object are smaller than a target distance.
8. The method of claim 1, wherein generating the clustering result according to the target fusion objects comprises:
generating an initial clustering object for each target fusion object;
storing the entity objects included in each target fusion object into the corresponding initial clustering object to obtain a target clustering object;
and adding the target clustering objects into a clustering list to obtain the clustering result.
9. A clustering apparatus for virtual objects, comprising:
a voxelization module, configured to voxelize the virtual objects to be clustered in the current scene to obtain an entity object corresponding to each virtual object to be clustered;
a merging module, configured to merge the entity objects to obtain initial fusion objects, wherein the volume of the bounding volume obtained by merging the entity objects included in each initial fusion object falls within a target threshold range;
a clustering module, configured to merge the initial fusion objects according to voxel data of the initial fusion objects to obtain target fusion objects, wherein the volume of the bounding volume corresponding to the plurality of entity objects included in each target fusion object falls within the target threshold range, and to generate a clustering result according to the target fusion objects;
wherein the clustering module includes: a first merging unit, configured to merge the initial fusion objects according to the voxel data of the initial fusion objects to obtain candidate fusion objects, wherein the candidate fusion objects do not share common entity objects, and the volume of the bounding volume corresponding to each candidate fusion object falls within the target threshold range; and a screening unit, configured to screen, from the candidate fusion objects, fusion objects satisfying a target condition as the target fusion objects.
10. A storage medium comprising a stored program, wherein the program when run performs the method of any one of the preceding claims 1 to 8.
11. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor performs the method of any of the preceding claims 1 to 8 by means of the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110420679.XA CN113134230B (en) | 2021-01-08 | 2021-01-08 | Clustering method and device for virtual objects, storage medium and electronic device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110420679.XA CN113134230B (en) | 2021-01-08 | 2021-01-08 | Clustering method and device for virtual objects, storage medium and electronic device |
CN202110025936.XA CN112337093B (en) | 2021-01-08 | 2021-01-08 | Virtual object clustering method and device, storage medium and electronic device |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110025936.XA Division CN112337093B (en) | 2021-01-08 | 2021-01-08 | Virtual object clustering method and device, storage medium and electronic device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113134230A CN113134230A (en) | 2021-07-20 |
CN113134230B true CN113134230B (en) | 2024-03-22 |
Family
ID=74427924
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110025936.XA Active CN112337093B (en) | 2021-01-08 | 2021-01-08 | Virtual object clustering method and device, storage medium and electronic device |
CN202110420679.XA Active CN113134230B (en) | 2021-01-08 | 2021-01-08 | Clustering method and device for virtual objects, storage medium and electronic device |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110025936.XA Active CN112337093B (en) | 2021-01-08 | 2021-01-08 | Virtual object clustering method and device, storage medium and electronic device |
Country Status (2)
Country | Link |
---|---|
CN (2) | CN112337093B (en) |
WO (1) | WO2022148075A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112337093B (en) * | 2021-01-08 | 2021-05-25 | 成都完美时空网络技术有限公司 | Virtual object clustering method and device, storage medium and electronic device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014125502A2 (en) * | 2013-02-18 | 2014-08-21 | Tata Consultancy Services Limited | Segmenting objects in multimedia data |
CN105247572A (en) * | 2012-10-05 | 2016-01-13 | 奥利亚医疗公司 | System and method for estimating a quantity of interest in a kinematic system by contrast agent tomography |
CN106558092A (en) * | 2016-11-16 | 2017-04-05 | 北京航空航天大学 | A kind of multiple light courcess scene accelerated drafting method based on the multi-direction voxelization of scene |
CN110325991A (en) * | 2016-09-19 | 2019-10-11 | 拜奥莫德克斯公司 | Method and apparatus for generating the 3D model of object |
CN110390706A (en) * | 2018-04-13 | 2019-10-29 | 北京京东尚科信息技术有限公司 | A kind of method and apparatus of object detection |
CN111429543A (en) * | 2020-02-28 | 2020-07-17 | 苏州叠纸网络科技股份有限公司 | Material generation method and device, electronic equipment and medium |
CN112090084A (en) * | 2020-11-23 | 2020-12-18 | 成都完美时空网络技术有限公司 | Object rendering method and device, storage medium and electronic equipment |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1209618A1 (en) * | 2000-11-28 | 2002-05-29 | TeraRecon, Inc., A Delaware Corporation | Volume rendering pipeline |
US7023433B2 (en) * | 2002-10-14 | 2006-04-04 | Chung Yuan Christian University | Computer-implemented method for constructing and manipulating a three-dimensional model of an object volume, and voxels used therein |
CN102609990B (en) * | 2012-01-05 | 2015-04-22 | 中国海洋大学 | Massive-scene gradually-updating algorithm facing complex three dimensional CAD (Computer-Aided Design) model |
US9964499B2 (en) * | 2014-11-04 | 2018-05-08 | Toshiba Medical Systems Corporation | Method of, and apparatus for, material classification in multi-energy image data |
US10473788B2 (en) * | 2017-12-13 | 2019-11-12 | Luminar Technologies, Inc. | Adjusting area of focus of vehicle sensors by controlling spatial distributions of scan lines |
CN108389202B (en) * | 2018-03-16 | 2020-02-14 | 青岛海信医疗设备股份有限公司 | Volume calculation method and device of three-dimensional virtual organ, storage medium and equipment |
CN108921945B (en) * | 2018-06-25 | 2022-11-04 | 中国石油大学(华东) | A Pore Network Model Construction Method Combining Center Axis and Solid Model |
US11217006B2 (en) * | 2018-10-29 | 2022-01-04 | Verizon Patent And Licensing Inc. | Methods and systems for performing 3D simulation based on a 2D video image |
CN110135599B (en) * | 2019-05-15 | 2020-09-01 | 南京林业大学 | Unmanned aerial vehicle electric power inspection point cloud intelligent processing and analyzing service platform |
CN110935169B (en) * | 2019-11-22 | 2021-09-14 | 腾讯科技(深圳)有限公司 | Control method of virtual object, information display method, device, equipment and medium |
CN111681274A (en) * | 2020-08-11 | 2020-09-18 | 成都艾尔帕思科技有限公司 | 3D human skeleton recognition and extraction method based on depth camera point cloud data |
CN112070909B (en) * | 2020-09-02 | 2024-06-11 | 中国石油工程建设有限公司 | Engineering three-dimensional model LOD output method based on 3D Tiles |
CN112337093B (en) * | 2021-01-08 | 2021-05-25 | 成都完美时空网络技术有限公司 | Virtual object clustering method and device, storage medium and electronic device |
2021
- 2021-01-08 CN CN202110025936.XA patent/CN112337093B/en active Active
- 2021-01-08 CN CN202110420679.XA patent/CN113134230B/en active Active
- 2021-09-30 WO PCT/CN2021/122146 patent/WO2022148075A1/en active Application Filing
Non-Patent Citations (3)
Title |
---|
BLPNet: An End-to-End Model Towards Voxelization Free 3D Object Detection; Zhihao Cui et al.; 2020 Joint 9th International Conference on Informatics, Electronics & Vision (ICIEV) and 2020 4th International Conference on Imaging, Vision & Pattern Recognition (icIVPR); full text *
Variational hierarchical oriented bounding box construction for solid mesh models; Wang Rui et al.; Journal of Software; full text *
Li Mengxin; Zheng Dai. An improved cluster-based multi-view stereo algorithm. Applied Science and Technology. (02), full text. *
Also Published As
Publication number | Publication date |
---|---|
CN113134230A (en) | 2021-07-20 |
CN112337093B (en) | 2021-05-25 |
CN112337093A (en) | 2021-02-09 |
WO2022148075A1 (en) | 2022-07-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109523621B (en) | Object loading method and device, storage medium and electronic device | |
CN109737974A (en) | A 3D navigation semantic map update method, device and device | |
CN111467806B (en) | Method, device, medium and electronic equipment for generating resources in game scene | |
CN111369681A (en) | Three-dimensional model reconstruction method, device, equipment and storage medium | |
CN106157354B (en) | A kind of three-dimensional scenic switching method and system | |
JP6864753B2 (en) | Object movement method and device, storage medium, electronic device | |
CN112973127A (en) | Game 3D scene editing method and device | |
CN113134230B (en) | Clustering method and device for virtual objects, storage medium and electronic device | |
CN113989443A (en) | Virtual face image reconstruction method and related device | |
CN109035423A (en) | A kind of method for partitioning storey and device of the virtual three-dimensional model in house | |
US10376796B2 (en) | Message processing method and terminal device | |
CN114917590B (en) | Virtual reality game system | |
CN115115752B (en) | Deformation prediction method and device for virtual clothing, storage medium and electronic equipment | |
Yu et al. | Saliency computation and simplification of point cloud data | |
CN117692611A (en) | Security image transmission method and system based on 5G | |
CN117078857A (en) | Reconstruction method and device of three-dimensional model, electronic equipment and computer readable medium | |
CN116912817A (en) | Three-dimensional scene model splitting method and device, electronic equipment and storage medium | |
CN113850888B (en) | Image processing method, device, electronic equipment and storage medium | |
CN110111427A (en) | Migration route automatic generation method and device in a kind of house virtual three-dimensional space | |
CN115965736A (en) | Image processing method, device, equipment and storage medium | |
CN111488476B (en) | Image pushing method, model training method and corresponding devices | |
CN114549777A (en) | 3D vector grid generation method and device | |
CN112435322A (en) | Rendering method, device and equipment of 3D model and storage medium | |
WO2024156209A1 (en) | Security boundary generation method and apparatus, device, storage medium and program product | |
CN117011461B (en) | Model modeling rendering processing method based on AI technology and edge calculation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||