
CN115499635B - Data compression processing method and device - Google Patents

Data compression processing method and device

Info

Publication number
CN115499635B
Authority
CN
China
Prior art keywords
compression
data
model
visual
object data
Prior art date
Legal status
Active
Application number
CN202211148120.7A
Other languages
Chinese (zh)
Other versions
CN115499635A
Inventor
曹佳炯
丁菁汀
Current Assignee
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202211148120.7A
Publication of CN115499635A
Application granted
Publication of CN115499635B
Legal status: Active
Anticipated expiration


Classifications

    • H04N 13/275 — Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • G06V 10/40 — Extraction of image or video features
    • G06V 10/763 — Pattern recognition or machine learning using clustering: non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G06V 10/764 — Pattern recognition or machine learning using classification, e.g. of video objects
    • G06V 10/774 — Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V 10/82 — Pattern recognition or machine learning using neural networks
    • G06V 20/20 — Scenes; Scene-specific elements in augmented reality scenes
    • H04N 13/161 — Encoding, multiplexing or demultiplexing different image signal components
    • H04N 13/332 — Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N 19/149 — Data rate or code amount at the encoder output, estimated by means of a model, e.g. a mathematical or statistical model
    • H04N 19/85 — Coding or decoding using pre-processing or post-processing specially adapted for video compression

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Algebra (AREA)
  • Mathematical Analysis (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of this specification provide a data compression processing method and device. The data compression processing method comprises the following steps: inputting a virtual data set corresponding to a virtual object in a virtual world into an extraction model to extract object data, obtaining the object data of the virtual object; performing visual feature recognition on the virtual object based on the object data to obtain the visual features of the virtual object; and reading the compression model corresponding to the visual features, and inputting the object data into the compression model for data compression to obtain compressed data.

Description

Data compression processing method and device
Technical Field
The present document relates to the field of virtualization technologies, and in particular, to a data compression processing method and device.
Background
The virtual world provides a simulation of the real world and can even provide scenes that are difficult to realize in the real world, so it is increasingly applied in a variety of scenarios. Because avatars and the various virtual articles in the virtual world are displayed in the form of three-dimensional data, they require considerable storage space, and transmitting the data corresponding to virtual objects requires substantial bandwidth and long transmission times.
Disclosure of Invention
One or more embodiments of the present specification provide a data compression processing method. The data compression processing method comprises: inputting a virtual data set corresponding to a virtual object in the virtual world into an extraction model to extract object data, obtaining the object data of the virtual object; performing visual feature recognition on the virtual object based on the object data to obtain the visual features of the virtual object; and reading the compression model corresponding to the visual features and inputting the object data into the compression model for data compression, obtaining compressed data.
One or more embodiments of the present specification provide a data compression processing apparatus, comprising: a data extraction module configured to input a virtual data set corresponding to a virtual object in the virtual world into an extraction model to extract object data, obtaining the object data of the virtual object; a visual feature recognition module configured to perform visual feature recognition on the virtual object based on the object data, obtaining the visual features of the virtual object; and a data compression module configured to read the compression model corresponding to the visual features and input the object data into the compression model for data compression, obtaining compressed data.
One or more embodiments of the present specification provide a data compression processing device, comprising: a processor; and a memory configured to store computer-executable instructions that, when executed, cause the processor to: input a virtual data set corresponding to a virtual object in the virtual world into an extraction model to extract object data, obtaining the object data of the virtual object; perform visual feature recognition on the virtual object based on the object data to obtain the visual features of the virtual object; and read the compression model corresponding to the visual features and input the object data into the compression model for data compression, obtaining compressed data.
One or more embodiments of the present specification provide a storage medium storing computer-executable instructions that, when executed by a processor, implement the following flow: inputting a virtual data set corresponding to a virtual object in the virtual world into an extraction model to extract object data, obtaining the object data of the virtual object; performing visual feature recognition on the virtual object based on the object data to obtain the visual features of the virtual object; and reading the compression model corresponding to the visual features and inputting the object data into the compression model for data compression, obtaining compressed data.
Drawings
To describe the solutions in one or more embodiments of this specification or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are only some of the embodiments in this specification; a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a process flow diagram of a data compression processing method according to one or more embodiments of the present disclosure;
FIG. 2 is a flow chart illustrating a method of processing data compression for a virtual compression scene according to one or more embodiments of the present disclosure;
FIG. 3 is a schematic diagram of a data compression processing apparatus according to one or more embodiments of the present disclosure;
FIG. 4 is a schematic structural diagram of a data compression processing device according to one or more embodiments of the present disclosure.
Detailed Description
To enable a person skilled in the art to better understand the technical solutions in one or more embodiments of this specification, those solutions are described below clearly and completely with reference to the drawings in one or more embodiments of this specification. The described embodiments are only some, not all, of the embodiments of this specification. All other embodiments obtained by a person skilled in the art based on one or more embodiments of this specification without inventive effort shall fall within the scope of protection of this document.
An embodiment of a data compression processing method provided in this specification is as follows:
In practical applications, three-dimensional data is typically compressed either with plane-based lossy compression or with lossless compression of the three-dimensional data itself. Plane-based lossy compression samples the three-dimensional data from multiple angles into two-dimensional data, compresses the sampled two-dimensional data lossily, and restores the data to three-dimensional space after compression, which yields poor data quality. Lossless compression of three-dimensional data achieves only a low compression ratio, so the compressed data still occupies a large amount of storage and transmission bandwidth.
Based on this, the data compression processing method provided in this embodiment performs visual feature recognition on the object data of a virtual object in the virtual world. Once the visual feature type of the virtual object is obtained, the object data is input into the compression model corresponding to that visual feature type for data compression, and compressed data is output. Because each object is compressed by a pre-trained compression model dedicated to its visual feature type, virtual objects are compressed by class rather than compressing virtual objects with different visual features through a single compression model. This achieves a better trade-off between the data precision and the data volume of the virtual object and improves data compression quality.
Referring to fig. 1, the data compression processing method provided in the present embodiment specifically includes steps S102 to S106.
Step S102, inputting a virtual data set corresponding to a virtual object in a virtual world into an extraction model to extract object data, and obtaining object data of the virtual object.
The virtual world refers to a simulated virtual world realized through decentralized collaboration and having an open economic system. In the virtual world, decentralized transactions are performed by generating non-fungible identifiers, and ownership of virtual assets is acquired through those transactions. Specifically, users in the real world may access the virtual world through an access device to conduct decentralized transactions and other activities there, where the other activities include perceiving virtual objects. The access device is used to access the virtual world and may be a VR (Virtual Reality) device, an AR (Augmented Reality) device, or the like connected to the virtual world, such as a head-mounted VR device.
A virtual object is an object displayed visually in the virtual world, for example an avatar representing a user, an object constituting the virtual environment, or an article placed in that environment, such as stones, trees, or buildings in the virtual world. Optionally, the virtual object is an object in the virtual world on which decentralized transactions can be performed and which is configured with a non-fungible identifier.
The virtual data set is a data set composed of data representing a virtual object in the virtual world; the data constituting it may be multidimensional data (e.g., three-dimensional data) or point cloud data. The object data is the data within the virtual data set that represents the virtual object itself, for example the foreground data of the virtual object.
Because the virtual data set of a virtual object may also contain data unrelated to the object itself, such as data describing the environment in which the object sits, compressing the entire data set would reduce compression efficiency and degrade the compression result for the object. In this embodiment, therefore, the virtual data set corresponding to the virtual object is acquired in the virtual world, and during compression, object data is first extracted from that virtual data set, obtaining the object data of the virtual object.
In this embodiment, inputting the virtual data set corresponding to a virtual object in the virtual world into an extraction model to extract object data includes inputting the virtual data set into a foreground-background recognition model for foreground-background recognition and taking the foreground data output by that model as the object data of the virtual object.
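As an illustration of this extraction step, the following is a minimal sketch assuming the virtual data set is a point cloud and the foreground-background recognition model is a pretrained per-point binary classifier. The class name ForegroundBackgroundNet, the layer sizes, and the 0.5 threshold are illustrative assumptions, not details specified in this document.

```python
# Hypothetical sketch of the object-data extraction step (step S102).
# The document only specifies a foreground-background recognition model
# whose foreground output becomes the object data; the network layout and
# threshold below are assumptions.
import torch
import torch.nn as nn

class ForegroundBackgroundNet(nn.Module):
    """Per-point binary classifier over an (N, 3) point cloud."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),                 # one foreground logit per point
        )

    def forward(self, points):                # points: (N, 3)
        return self.mlp(points).squeeze(-1)   # (N,) logits

def extract_object_data(virtual_data_set, model):
    """Keep only the points the model classifies as foreground."""
    with torch.no_grad():
        mask = torch.sigmoid(model(virtual_data_set)) > 0.5
    return virtual_data_set[mask]             # object data of the virtual object
```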
In addition, the step of inputting the virtual data set corresponding to the virtual object in the virtual world into an extraction model to obtain the object data may be replaced with directly extracting object data from the virtual data set corresponding to the virtual object in the virtual world, obtaining the object data of the virtual object; this, combined with the other processing steps provided in this embodiment, forms a new implementation.
Step S104, performing visual feature recognition on the virtual object based on the object data to obtain the visual features of the virtual object.
The visual features of a virtual object are its observable features, e.g., shape and volume. In this embodiment, the process of classifying and compressing virtual objects is described using shape as the example feature.
To improve the validity and accuracy of the obtained visual features of the virtual object, shape recognition is performed on the virtual object with a pre-trained shape recognition model. In an optional implementation provided in this embodiment, the visual features of the virtual object are recognized as follows:
Inputting the object data into a shape recognition model to perform shape recognition to obtain an object shape of the virtual object;
the shape recognition model is trained on labeled object data samples carrying shape labels.
Specifically, the shape of the virtual object is identified based on the object data and the shape recognition model, obtaining the object shape of the virtual object. Note that the data and samples referred to in this embodiment are data in the virtual world.
In practice, training of the shape recognition model can be completed in advance, for example on a cloud server. To improve its recognition accuracy, the shape recognition model is trained on labeled object data samples carrying shape labels. In an optional implementation provided in this embodiment, the labeled object data samples are determined as follows:
extracting object data from each virtual data set sample to obtain an object data sample set;
inputting each object data sample in the object data sample set into a feature encoder for feature encoding to obtain the object features corresponding to each object data sample;
performing shape clustering processing on the object data sample set based on the object features to obtain a plurality of shape types and type sample sets under the shape types;
marking the shape type of each object data sample in each type sample set, obtaining the labeled object data samples.
In this embodiment, to improve the accuracy and validity of the features produced by the feature encoder, the feature encoder is obtained by training a data reconstruction network. During training, the data reconstruction network uses a three-dimensional CNN (Convolutional Neural Network) as its backbone and comprises two parts: the first part is the feature encoder and the second part is a decoder. During network training, each object data sample in the object data sample set is input into the feature encoder to obtain the object features it outputs; those object features are input into the decoder for data reconstruction, obtaining reconstructed object data. The network is trained with the Euclidean distance between each object data sample and its reconstruction as the loss function until the network converges, and the feature encoder of the trained data reconstruction network is retained. Training of the data reconstruction network may also be performed in advance, for example on a cloud server.
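A minimal sketch of such a data reconstruction network follows, assuming the object data samples are voxelized into 32×32×32 occupancy grids; the layer sizes and feature dimension are illustrative assumptions. The loss is the Euclidean distance between a sample and its reconstruction, as described above.

```python
# Hypothetical sketch of the data-reconstruction network that trains the
# feature encoder: a 3D-CNN autoencoder with a Euclidean-distance loss.
# Voxel input format and layer sizes are assumptions, not from the document.
import torch
import torch.nn as nn

class FeatureEncoder(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(32 * 8 * 8 * 8, feat_dim),
        )

    def forward(self, x):                  # x: (B, 1, 32, 32, 32) voxel grids
        return self.net(x)                 # (B, feat_dim) object features

class ReconstructionDecoder(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 32 * 8 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, z):
        return self.deconv(self.fc(z).view(-1, 32, 8, 8, 8))

encoder, decoder = FeatureEncoder(), ReconstructionDecoder()
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

def train_step(batch):                     # batch: (B, 1, 32, 32, 32)
    recon = decoder(encoder(batch))
    loss = torch.norm(recon - batch)       # Euclidean-distance loss per the text
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```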
The labeled object data samples are determined through unsupervised clustering. Specifically, because shape classes cannot be assigned accurately from the raw object data samples, after object data is extracted from each virtual data set sample to obtain the object data sample set, feature recognition is performed on each object data sample to obtain its object features, and the samples in the object data sample set are clustered based on those features, obtaining a plurality of shape types and the type sample set under each shape type. Optionally, the shape clustering of the object data sample set based on the object features is performed with a clustering algorithm, such as the K-means clustering algorithm.
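The clustering step can be sketched as follows, using scikit-learn's K-means over the encoder's object features; the number of shape types is an assumption, since this document does not fix it.

```python
# Hypothetical sketch of the shape clustering step: K-means over encoder
# features. n_shape_types is an illustrative assumption.
from sklearn.cluster import KMeans

def cluster_shapes(object_features, n_shape_types=10):
    """object_features: (num_samples, feat_dim) array of encoder outputs.

    Returns the shape-type label of each sample; samples sharing a label
    form the type sample set for that shape type."""
    kmeans = KMeans(n_clusters=n_shape_types, n_init=10, random_state=0)
    return kmeans.fit_predict(object_features)
```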
After the plurality of shape types and their type sample sets are obtained, the type sample sets can be updated to correct any deviations introduced by the shape clustering; the updates include, but are not limited to, merging type sample sets of similar types, transferring object data samples between type sample sets, and deleting noise data from a type sample set. In an optional implementation provided in this embodiment, the type sample sets are updated as follows (these operations are sketched below):
merging the type sample sets under at least two shape types of the plurality of shape types according to a merging instruction for those shape types; or
transferring a target object data sample from the type sample set under any shape type to the type sample set under a target shape type according to a shape-type switching instruction for that target object data sample; or
deleting any object data sample from the type sample set under its shape type according to a deleting instruction for that object data sample.
Specifically, after the plurality of shape types and the type sample set under each shape type are obtained, the type sample sets are updated, and the object data samples in each updated type sample set are shape-labeled based on the updated shape types, obtaining the labeled object data samples.
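The three update operations can be sketched as follows, modeling each type sample set as a list of sample identifiers keyed by shape type; the document does not prescribe a concrete data structure, so this representation is an assumption.

```python
# Hypothetical sketch of the type-sample-set updates (merge, transfer,
# delete). type_sets is assumed to be a dict mapping shape type -> list of
# sample ids; the document leaves the representation open.
def merge_types(type_sets, type_a, type_b):
    """Merge the sample sets of two similar shape types into type_a."""
    type_sets[type_a].extend(type_sets.pop(type_b))

def transfer_sample(type_sets, sample_id, src_type, dst_type):
    """Move a mis-clustered sample to the target shape type."""
    type_sets[src_type].remove(sample_id)
    type_sets[dst_type].append(sample_id)

def delete_sample(type_sets, sample_id, shape_type):
    """Drop a noisy sample from its type sample set."""
    type_sets[shape_type].remove(sample_id)
```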
After the labeled object data samples are obtained, the shape recognition model is trained on them so that shape recognition can then be performed on object data with the model. The shape recognition model is trained as a multi-class classifier with a ResNet structure and a multi-class softmax loss function until the model converges; its input is object data and its output is the object shape.
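A minimal training sketch follows, standing in torchvision's 2D ResNet-18 for the unspecified "ResNet structure" and cross-entropy for the multi-class softmax loss; for volumetric object data a 3D backbone would likely be substituted. The number of shape types is an assumption.

```python
# Hypothetical sketch of shape-recognition-model training: a multi-class
# ResNet classifier with a softmax (cross-entropy) loss, per the text.
# resnet18 here expects (B, 3, H, W) inputs, so it assumes a 2D rendering
# of the object data; the document does not specify the exact backbone.
import torch
import torch.nn as nn
from torchvision.models import resnet18

num_shape_types = 10                       # assumed; set by the clustering step
model = resnet18(num_classes=num_shape_types)
criterion = nn.CrossEntropyLoss()          # multi-class softmax loss
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(object_data, shape_labels):
    logits = model(object_data)            # input: object data, output: shape
    loss = criterion(logits, shape_labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```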
During shape recognition, key data representing the outer region of the virtual object is extracted from the object data, the external features of the virtual object are constructed from that key data, and the object shape of the virtual object is determined from the external features. For example, the key data of the outer region of the virtual object is extracted from the object data, the external contour of the virtual object is drawn from that key data, and the object shape corresponding to that contour is taken as the object shape of the virtual object.
Step S106, reading the compression model corresponding to the visual features, and inputting the object data into the compression model for data compression to obtain compressed data.
In the preceding steps, object data is first extracted from the virtual data set corresponding to the virtual object in the virtual world to obtain the object data of the virtual object, and visual feature recognition is then performed on the virtual object based on that object data to obtain its visual features. On this basis, the compression model corresponding to the visual features is read, and the object data is input into it for data compression, obtaining compressed data. Optionally, reading the compression model corresponding to the visual features means reading it from a compression model set that contains a compression model for each visual feature, where the compression model corresponding to a visual feature is composed of the compression encoder and decoder for that feature.
In this embodiment, a multi-network, multi-task compression model set is designed so that the object data of virtual objects with different visual features is compressed by the matching model in the set, ensuring the compression effect and the quality of the compressed data. One compression model is trained for each visual feature. In practice, the training of each compression model in the set may be performed in advance, for example on a cloud server.
In an optional implementation provided in this embodiment, the training process of a compression model is described by taking the visual feature of a single virtual object as an example; specifically, the compression model corresponding to that visual feature is trained as follows:
inputting each object data sample in the type sample set under the visual feature into the compression encoder of the model to be trained for compression encoding, and outputting the compressed sample of each object data sample;
inputting the compressed samples into the decoder of the model to be trained for data reconstruction, and outputting the reconstructed data of each object data sample;
and calculating training loss based on the reconstruction data and the object data sample, carrying out parameter adjustment on the model to be trained based on the training loss, and obtaining the compression model after training is completed.
The model to be trained has, for example, a UNET structure and comprises two parts, a compression encoder and a decoder. The input of the compression encoder is object data and its output is encoded compressed data; the input of the decoder is the encoded compressed data and its output is the compressed data reconstructed from it.
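A minimal sketch of one such per-feature compression model follows. It keeps the encoder-decoder split described above but omits UNET skip connections for brevity; the voxel input format, layer sizes, and mean-squared-error form of the reconstruction loss are illustrative assumptions.

```python
# Hypothetical sketch of a per-visual-feature compression model: an
# encoder-decoder trained to reconstruct object data samples, with the
# reconstruction error as the training loss. Simplified relative to a
# full UNET (no skip connections); shapes are assumptions.
import torch
import torch.nn as nn

class CompressionModel(nn.Module):
    def __init__(self, code_dim=64):
        super().__init__()
        self.compression_encoder = nn.Sequential(    # object data -> code
            nn.Conv3d(1, 16, 4, 2, 1), nn.ReLU(),
            nn.Conv3d(16, code_dim, 4, 2, 1),
        )
        self.decoder = nn.Sequential(                # code -> compressed data
            nn.ConvTranspose3d(code_dim, 16, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, 2, 1),
        )

    def forward(self, x):
        return self.decoder(self.compression_encoder(x))

def train_compression_model(model, type_sample_loader, epochs=10):
    """Train one compression model on the type sample set of one feature."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for sample in type_sample_loader:
            recon = model(sample)
            loss = torch.mean((recon - sample) ** 2)   # training loss
            opt.zero_grad(); loss.backward(); opt.step()
    return model
```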
After the compression model for each visual feature has been trained in the above manner, the compression model set is determined as follows in this embodiment, in order to reduce the number of compression models in the set while preserving compression quality:
Model training is carried out based on the type sample set under each visual feature, and a compression model corresponding to each visual feature is obtained;
Calculating gradient correlation of the compression model corresponding to each visual feature;
if at least two compression models with gradient correlation larger than a preset threshold exist, merging type sample sets under the visual characteristics corresponding to the at least two compression models;
Model training is carried out based on the merging type sample set obtained through merging processing, and a compression model corresponding to the updated visual characteristics is obtained.
Further, in an optional implementation provided in this embodiment, when model training is performed on the merged type sample set to obtain the compression model corresponding to the updated visual features, the compression encoder is trained on all object data samples in the merged set, while a separate decoder is trained for each visual feature on that feature's object data samples, yielding one compression encoder and at least two decoders.
Specifically, after the compression models corresponding to all visual features are trained, to reduce the number of compression encoders in the compression model set, the same batch of data (drawn from the type sample sets under the various visual features) is used to compute the gradient correlation between the compression models of the different features. The type sample sets whose models have a gradient correlation above a preset threshold are merged; the merged type sample set shares one compression encoder, while the decoders remain independent per visual feature.
During model training on the merged type sample set, the shared compression encoder is trained on the whole merged set, and the decoder for each visual feature is trained on that feature's object data samples within the merged set. For example, the gradient correlation between the compression model for the host-computer shape and the compression model for the printer shape may be greater than 95%, or the gradient correlation between the compression models for a 1000 ml beverage bottle and a 500 ml beverage bottle may be greater than 95%.
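The gradient-correlation test can be sketched as follows. The document does not specify the correlation measure, so cosine similarity between flattened loss gradients on a shared batch is used here as an assumed stand-in; it also assumes both models share one architecture so their gradient vectors are comparable.

```python
# Hypothetical sketch of the gradient-correlation test used to decide
# which per-feature compression models to merge.
import torch
import torch.nn.functional as F

def gradient_vector(model, batch):
    """Flattened gradient of the reconstruction loss on one shared batch."""
    model.zero_grad()
    loss = torch.mean((model(batch) - batch) ** 2)
    loss.backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()])

def gradient_correlation(model_a, model_b, shared_batch):
    g_a = gradient_vector(model_a, shared_batch)
    g_b = gradient_vector(model_b, shared_batch)
    return F.cosine_similarity(g_a, g_b, dim=0).item()

# if gradient_correlation(model_a, model_b, batch) > 0.95:
#     merge the two type sample sets and retrain on the merged set
```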
Taking as an example a gradient correlation greater than the preset threshold between the compression model corresponding to a first visual feature and the compression model corresponding to a second visual feature, model training on the merged type sample set proceeds as follows:
Training a compression encoder and a first decoder based on a first object data sample in the merged type sample set, and training the compression encoder and a second decoder based on a second object data sample in the merged type sample set;
obtaining, through the training, the compression encoder shared by the first visual feature and the second visual feature, the first decoder corresponding to the first visual feature, and the second decoder corresponding to the second visual feature;
The first object data sample is an object data sample under the first visual characteristic, and the second object data sample is an object data sample under the second visual characteristic.
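A minimal sketch of this merged training follows: samples of the first visual feature update the shared compression encoder together with the first decoder, and samples of the second feature update the encoder together with the second decoder. The joint optimizer and summed loss are illustrative assumptions.

```python
# Hypothetical sketch of training one shared compression encoder with two
# per-feature decoders after merging two type sample sets.
import torch

def train_merged(encoder, decoder_1, decoder_2, loader_1, loader_2, steps=1000):
    opt = torch.optim.Adam(
        [*encoder.parameters(), *decoder_1.parameters(), *decoder_2.parameters()],
        lr=1e-3,
    )
    for _, (s1, s2) in zip(range(steps), zip(loader_1, loader_2)):
        loss = (torch.mean((decoder_1(encoder(s1)) - s1) ** 2)   # first feature
                + torch.mean((decoder_2(encoder(s2)) - s2) ** 2))  # second feature
        opt.zero_grad(); loss.backward(); opt.step()
    return encoder, decoder_1, decoder_2
```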
In implementation, after the visual features of the virtual object are obtained, the compression model corresponding to those features is read, and the object data is input into it for data compression to obtain compressed data. Since the compression model set contains multiple compression encoders and multiple decoders, and their numbers are not necessarily equal, the compression model corresponding to the visual features is first assembled by:
Reading a compression encoder and a decoder corresponding to the visual features from a compression model set;
and constructing the compression model corresponding to the visual features based on the read compression encoder and decoder.
Specifically, the compression encoder and decoder corresponding to the visual features are read from the compression model set, the compression model composed of the read compression encoder and decoder is assembled, and that model is used as the compression model corresponding to the visual features.
Further, after the compression model corresponding to the visual features is read, data compression is performed on the object data with it. In an optional implementation provided in this embodiment, inputting the object data into the compression model for data compression to obtain compressed data includes:
Inputting the object data into the compression encoder for compression encoding to obtain encoded compressed data output by the compression encoder;
and inputting the encoded compressed data into the decoder for data decoding to obtain the compressed data.
Specifically, during the data compression of the object data by the compression model, the object data is input into the read compression encoder for compression encoding to obtain encoded compressed data, and the encoded compressed data is input into the read decoder for data decoding to obtain the compressed data, i.e., the compressed form of the object data.
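The compression step itself then reduces to two forward passes, sketched below under the same assumed model structure: the read compression encoder produces the encoded compressed data, and the read decoder reconstructs it into the compressed data of the virtual object.

```python
# Hypothetical sketch of the compression step (step S106) with a read
# compression encoder and decoder.
import torch

def compress(object_data, compression_encoder, decoder):
    with torch.no_grad():
        encoded = compression_encoder(object_data)   # encoded compressed data
        compressed = decoder(encoded)                # compressed data
    return encoded, compressed
```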
In addition, in step S106, reading the compression model corresponding to the visual features and inputting the object data into it for data compression may be replaced with directly inputting the object data into the compression model corresponding to the visual features for data compression to obtain the compressed data; this, combined with the other processing steps provided in this embodiment, forms a new implementation.
In summary, in the data compression processing method provided in this embodiment, when the virtual data set corresponding to a virtual object in the virtual world is compressed, object data is first extracted from the virtual data set to improve the accuracy of the compressed virtual object and to avoid the quality loss caused by also compressing background-area data. Further, to improve the compression effect and avoid the degradation caused by compressing the object data of differently shaped virtual objects in the same way, shape recognition is performed on the virtual object based on the object data to obtain the shape of the virtual object, and on that basis the object data is input into the compression model corresponding to that shape to obtain the compressed data of the virtual object. Compressing by shape class in this way improves the quality of the compressed data.
The data compression processing method provided in this embodiment is further described below by taking its application to a virtual compression scene as an example. Referring to fig. 2, the data compression processing method applied to the virtual compression scene specifically includes the following steps.
Step S202, a virtual data set corresponding to a virtual object in a virtual world is obtained.
Step S204, screening out object data in the virtual data set based on the foreground and background classifier.
Specifically, the virtual data set is input into a foreground and background classifier to carry out foreground and background classification, and object data output by the foreground and background classifier is obtained. The foreground and background classifier can also be a foreground and background recognition model.
In step S206, the object data is input into the shape recognition model to perform shape recognition, so as to obtain the object shape of the virtual object.
Step S208, the compression encoder and decoder corresponding to the object shape are read.
In step S210, the object data is input to the compression encoder for compression encoding, and encoded compressed data is obtained.
Step S212, inputting the encoded compressed data into a decoder for data decoding to obtain the compressed data of the virtual object.
In addition, steps S208 to S212 may be replaced with inputting the object data into the compression model corresponding to the object shape for data compression, obtaining the compressed data of the virtual object; this forms a new implementation with the other processing steps provided in this embodiment.
An embodiment of a data compression processing apparatus provided in the present specification is as follows:
A data compression processing method is provided in the above embodiments; a data compression processing apparatus corresponding to it is described below with reference to the accompanying drawings.
Referring to fig. 3, a schematic diagram of a data compression processing apparatus according to the present embodiment is shown.
Since the apparatus embodiments correspond to the method embodiments, their description is relatively brief; for relevant details, refer to the corresponding descriptions of the method embodiments provided above. The apparatus embodiments described below are merely illustrative.
The present embodiment provides a data compression processing apparatus including:
The data extraction module 302 is configured to input a virtual data set corresponding to a virtual object in a virtual world into an extraction model to extract object data, and obtain object data of the virtual object;
A visual feature recognition module 304 configured to perform visual feature recognition on the virtual object based on the object data, to obtain visual features of the virtual object;
the data compression module 306 is configured to read the compression model corresponding to the visual feature, and input the object data into the compression model for data compression processing to obtain compressed data.
An embodiment of a data compression processing apparatus provided in the present specification is as follows:
Corresponding to the data compression processing method described above, and based on the same technical concept, one or more embodiments of the present disclosure further provide a data compression processing device for performing that method. FIG. 4 is a schematic structural diagram of the data compression processing device provided by one or more embodiments of the present disclosure.
The data compression processing device provided in this embodiment includes:
As shown in fig. 4, the data compression processing device may differ considerably depending on configuration or performance, and may include one or more processors 401 and a memory 402, where the memory 402 may store one or more applications or data. The memory 402 may be transient or persistent storage. The application stored in the memory 402 may include one or more modules (not shown), each of which may include a series of computer-executable instructions for the data compression processing device. Further, the processor 401 may be arranged to communicate with the memory 402 and execute the series of computer-executable instructions in the memory 402 on the device. The data compression processing device may also include one or more power supplies 403, one or more wired or wireless network interfaces 404, one or more input/output interfaces 405, one or more keyboards 406, and the like.
In a particular embodiment, the data compression processing device includes a memory and one or more programs, where the one or more programs are stored in the memory, may include one or more modules, and each module may include a series of computer-executable instructions for the device; the one or more programs, configured to be executed by the one or more processors, include computer-executable instructions for:
inputting a virtual data set corresponding to a virtual object in a virtual world into an extraction model to extract object data, and obtaining object data of the virtual object;
performing visual feature recognition on the virtual object based on the object data to obtain visual features of the virtual object;
and reading the compression model corresponding to the visual characteristic, and inputting the object data into the compression model for data compression processing to obtain compressed data.
An embodiment of a storage medium provided in the present specification is as follows:
in correspondence to the above-described data compression processing method, one or more embodiments of the present disclosure further provide a storage medium based on the same technical concept.
The storage medium provided in this embodiment is configured to store computer executable instructions that, when executed by a processor, implement the following flow:
inputting a virtual data set corresponding to a virtual object in a virtual world into an extraction model to extract object data, and obtaining object data of the virtual object;
performing visual feature recognition on the virtual object based on the object data to obtain visual features of the virtual object;
and reading the compression model corresponding to the visual characteristic, and inputting the object data into the compression model for data compression processing to obtain compressed data.
It should be noted that the storage medium embodiments and the data compression processing method embodiments in this specification are based on the same inventive concept, so the specific implementation of this embodiment may refer to the implementation of the corresponding method described above; repeated parts are not described again.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures: designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (PLD), such as a field programmable gate array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development; the source code to be compiled is written in a specific programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained merely by slightly programming the method flow into an integrated circuit using one of the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the memory's control logic. Those skilled in the art also know that, besides implementing the controller purely as computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included in it for implementing various functions may also be regarded as structures within the hardware component, or even as both software modules implementing the method and structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each unit may be implemented in the same piece or pieces of software and/or hardware when implementing the embodiments of the present specification.
One skilled in the relevant art will recognize that one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
One or more embodiments of the present specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing description is by way of example only and is not intended to limit the present disclosure. Various modifications and changes may occur to those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. that fall within the spirit and principles of the present document are intended to be included within the scope of the claims of the present document.

Claims (13)

1. A data compression processing method, comprising:
inputting a virtual data set corresponding to a virtual object in a virtual world into an extraction model to extract object data, and obtaining object data of the virtual object;
Performing visual feature recognition on the virtual object based on the object data to obtain visual features of the virtual object, wherein the visual features comprise the shape of the virtual object;
reading a compression model corresponding to the visual features, and inputting the object data into the compression model for data compression processing to obtain compressed data;
The compression model corresponding to the visual feature is read from a compression model set comprising compression models corresponding to the visual feature, and each compression model in the compression model set is determined by adopting the following mode:
Model training is carried out based on the type sample set under each visual feature, and a compression model corresponding to each visual feature is obtained;
Calculating gradient correlation of the compression model corresponding to each visual feature;
if at least two compression models with gradient correlation larger than a preset threshold exist, merging type sample sets under the visual characteristics corresponding to the at least two compression models;
model training is carried out based on the merging type sample set obtained by merging processing, and a compression model corresponding to the updated visual characteristics is obtained;
the obtaining the compression model corresponding to each visual characteristic comprises the following steps:
inputting each object data sample in the type sample set under the visual feature into a compression encoder in a model to be trained to perform compression encoding, and outputting a compressed sample of each object data sample;
inputting the compressed samples into a decoder in the model to be trained to perform data reconstruction, and outputting reconstructed data of each object data sample; and
calculating a training loss based on the reconstructed data and the object data samples, adjusting parameters of the model to be trained based on the training loss, and obtaining the compression model after training is completed.
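For readers who want the training procedure of claim 1 in concrete terms, the following is a minimal, hypothetical PyTorch sketch (the patent names no framework or architecture). It trains one autoencoder-style compression model per visual feature with a reconstruction loss, measures gradient correlation as the cosine similarity of flattened loss gradients, and merges two features' sample sets for retraining when that correlation exceeds a preset threshold. Every name, layer size, and the choice of cosine similarity are editorial assumptions, not the patent's implementation.

```python
# Editorial sketch only; assumes object data samples are fixed-size vectors,
# so all per-feature models share one architecture and gradients align.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompressionModel(nn.Module):
    """Compression encoder + decoder pair for one visual feature (assumed)."""
    def __init__(self, dim=1024, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(),
                                     nn.Linear(256, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(),
                                     nn.Linear(256, dim))

    def forward(self, x):
        code = self.encoder(x)           # compressed sample
        return code, self.decoder(code)  # reconstructed data

def train_compression_model(samples, epochs=100):
    """Train one model on the type sample set of a single visual feature."""
    model = CompressionModel(dim=samples.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        _, recon = model(samples)
        loss = F.mse_loss(recon, samples)  # training loss vs. the samples
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

def loss_gradient(model, samples):
    """Flattened gradient of the reconstruction loss over all parameters."""
    model.zero_grad()
    _, recon = model(samples)
    F.mse_loss(recon, samples).backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()])

def merge_if_correlated(models, sample_sets, threshold=0.9):
    """Merge the sample sets of the first model pair whose gradient
    correlation (here: cosine similarity, an assumption) exceeds the
    preset threshold, then retrain a single model on the merged set."""
    feats = list(models)
    for i, a in enumerate(feats):
        for b in feats[i + 1:]:
            corr = F.cosine_similarity(
                loss_gradient(models[a], sample_sets[a]),
                loss_gradient(models[b], sample_sets[b]), dim=0)
            if corr > threshold:
                merged = torch.cat([sample_sets[a], sample_sets[b]])
                return (a, b), train_compression_model(merged)
    return None, None
```

In this sketch the retrained model replaces the two originals; repeating the scan until no pair exceeds the threshold would yield the final compression model set described by the claims.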
2. The data compression processing method according to claim 1, wherein the performing visual feature recognition on the virtual object based on the object data to obtain the visual features of the virtual object comprises:
inputting the object data into a shape recognition model to perform shape recognition, to obtain the shape of the virtual object;
wherein the shape recognition model is obtained by training on labeled object data samples carrying shape annotations.
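A minimal sketch of the shape recognition model of claim 2, assuming the labeled object data samples are feature vectors paired with integer shape annotations; the architecture, names, and training settings are hypothetical.

```python
# Editorial sketch only; a small classifier trained on shape-annotated samples.
import torch
import torch.nn as nn

def train_shape_recognizer(samples, shape_labels, num_shapes, epochs=100):
    """samples: float tensor (N, dim); shape_labels: long tensor (N,)."""
    model = nn.Sequential(nn.Linear(samples.shape[1], 256), nn.ReLU(),
                          nn.Linear(256, num_shapes))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        loss = loss_fn(model(samples), shape_labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model  # argmax over its logits yields the virtual object's shape
```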
3. The data compression processing method according to claim 2, wherein the labeled object data samples are determined in the following manner:
performing object data extraction on each virtual data set sample to obtain an object data sample set;
inputting each object data sample in the object data sample set into a feature encoder for feature encoding to obtain an object feature corresponding to each object data sample;
performing shape clustering processing on the object data sample set based on the object features to obtain a plurality of shape types and a type sample set under each shape type; and
marking the shape type of each object data sample in the type sample set to obtain the labeled object data samples.
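The labeling pipeline of claim 3 reduces to encode, cluster, mark. Below is a minimal sketch assuming a feature encoder that maps each object data sample to a vector, with k-means standing in for the unspecified clustering step; all names are hypothetical.

```python
# Editorial sketch only; k-means stands in for the unspecified clustering.
import numpy as np
from sklearn.cluster import KMeans

def label_samples_by_shape(object_samples, feature_encoder, num_shapes=8):
    # Feature encoding: one object feature per object data sample
    features = np.stack([feature_encoder(s) for s in object_samples])
    # Shape clustering: each cluster index serves as a shape type
    shape_types = KMeans(n_clusters=num_shapes, n_init=10).fit_predict(features)
    # Shape-type marking: pair every sample with its shape type
    return [(sample, int(t)) for sample, t in zip(object_samples, shape_types)]
```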
4. The data compression processing method according to claim 3, wherein after the performing shape clustering processing on the object data sample set based on the object features to obtain the plurality of shape types and the type sample set under each shape type, and before the performing shape-type marking on each object data sample in the type sample set to obtain the labeled object data samples, the method further comprises:
merging the type sample sets under at least two shape types of the plurality of shape types according to a merging instruction for the at least two shape types;
and/or,
transferring a target object data sample from the type sample set under any shape type to the type sample set under a target shape type according to a shape-type switching instruction for the target object data sample;
and/or,
deleting any object data sample from the type sample set under any shape type according to a deletion instruction for that object data sample.
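The three maintenance operations of claim 4 (merge, transfer, delete) amount to simple edits of the per-shape-type sample sets. A sketch with the sets modeled as a dict from shape type to a list of samples; the representation is an assumption.

```python
# Editorial sketch only; type sample sets as {shape_type: [samples]}.
def merge_types(type_sets, type_a, type_b):
    """Merging instruction: fold type_b's sample set into type_a's."""
    type_sets[type_a].extend(type_sets.pop(type_b))

def transfer_sample(type_sets, sample, src_type, dst_type):
    """Shape-type switching instruction: move a target sample across types."""
    type_sets[src_type].remove(sample)
    type_sets[dst_type].append(sample)

def delete_sample(type_sets, sample, src_type):
    """Deletion instruction: drop a sample from its shape type's set."""
    type_sets[src_type].remove(sample)
```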
5. The data compression processing method according to claim 1, wherein the compression model corresponding to the visual feature is composed of a compression encoder corresponding to the visual feature and a decoder corresponding to the visual feature.
6. The data compression processing method according to claim 1, wherein the performing model training based on the merged type sample set obtained by the merging processing to obtain the compression model corresponding to the updated visual features comprises:
training a compression encoder based on the object data samples in the merged type sample set, and training a decoder for each visual feature based on the object data samples corresponding to that visual feature in the merged type sample set, to obtain one compression encoder and at least two decoders; and
obtaining the compression model corresponding to the updated visual features based on the obtained compression encoder and the at least two decoders.
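Claim 6's updated model pairs one shared compression encoder with a decoder per original visual feature. A hypothetical PyTorch sketch of that structure follows; layer sizes and names are assumptions.

```python
# Editorial sketch only; one shared encoder, one decoder per merged feature.
import torch.nn as nn

class MergedCompressionModel(nn.Module):
    def __init__(self, feature_ids, dim=1024, code_dim=64):
        super().__init__()
        # Single compression encoder shared by all merged visual features
        self.encoder = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(),
                                     nn.Linear(256, code_dim))
        # At least two decoders: one per visual feature in the merged set
        self.decoders = nn.ModuleDict({
            str(f): nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(),
                                  nn.Linear(256, dim))
            for f in feature_ids})

    def forward(self, x, feature_id):
        # Shared encoding, feature-specific decoding
        return self.decoders[str(feature_id)](self.encoder(x))
```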
7. The data compression processing method according to claim 5, wherein the reading the compression model corresponding to the visual feature includes:
Reading a compression encoder corresponding to the visual feature and a decoder corresponding to the visual feature from a compression model set;
and constructing a compression model corresponding to the visual feature based on the read compression encoder corresponding to the visual feature and the read decoder corresponding to the visual feature.
8. The data compression processing method according to claim 7, wherein the inputting the object data into the compression model for data compression processing to obtain the compressed data comprises:
Inputting the object data into the compression encoder for compression encoding to obtain encoded compressed data output by the compression encoder;
and inputting the encoded compressed data into the decoder for data decoding to obtain the compressed data.
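Claims 7 and 8 together describe reading the stored encoder/decoder pair for a visual feature and running the object data through both. A minimal sketch, assuming the compression model set is a mapping from visual feature to its two components; the layout and names are assumptions.

```python
# Editorial sketch only; model set as
# {visual_feature: {"encoder": ..., "decoder": ...}}.
import torch

def read_compression_model(model_set, visual_feature):
    entry = model_set[visual_feature]
    return entry["encoder"], entry["decoder"]

def compress(object_data, model_set, visual_feature):
    encoder, decoder = read_compression_model(model_set, visual_feature)
    with torch.no_grad():
        encoded = encoder(object_data)  # encoded compressed data (claim 8)
        return decoder(encoded)         # compressed data output by decoding
```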
9. The data compression processing method according to claim 3, wherein the performing shape clustering processing on the object data sample set based on the object features comprises:
performing shape clustering processing on the object data sample set by using a clustering algorithm based on the object features.
10. The data compression processing method according to claim 1, wherein decentralized transactions are performed in the virtual world by generating non-fungible identifiers, and ownership of virtual assets is acquired through such transactions;
wherein the virtual object comprises an object that performs a decentralized transaction in the virtual world and is configured with a non-fungible identifier.
11. A data compression processing apparatus comprising:
a data extraction module configured to input a virtual data set corresponding to a virtual object in a virtual world into an extraction model to perform object data extraction, and obtain object data of the virtual object;
a visual feature recognition module configured to perform visual feature recognition on the virtual object based on the object data to obtain visual features of the virtual object, the visual features comprising a shape of the virtual object; and
a data compression module configured to read a compression model corresponding to the visual features, and input the object data into the compression model for data compression processing to obtain compressed data;
wherein the compression model corresponding to the visual features is read from a compression model set comprising compression models corresponding to respective visual features, and each compression model in the compression model set is determined in the following manner:
performing model training based on the type sample set under each visual feature to obtain the compression model corresponding to that visual feature;
calculating gradient correlations between the compression models corresponding to the visual features;
if at least two compression models whose gradient correlation is greater than a preset threshold exist, merging the type sample sets under the visual features corresponding to the at least two compression models; and
performing model training based on the merged type sample set obtained by the merging processing to obtain a compression model corresponding to the updated visual features;
wherein the obtaining the compression model corresponding to each visual feature comprises:
inputting each object data sample in the type sample set under the visual feature into a compression encoder in a model to be trained to perform compression encoding, and outputting a compressed sample of each object data sample;
inputting the compressed samples into a decoder in the model to be trained to perform data reconstruction, and outputting reconstructed data of each object data sample; and
calculating a training loss based on the reconstructed data and the object data samples, adjusting parameters of the model to be trained based on the training loss, and obtaining the compression model after training is completed.
12. A data compression processing apparatus comprising:
a processor; and
a memory configured to store computer-executable instructions that, when executed, cause the processor to perform the following:
inputting a virtual data set corresponding to a virtual object in a virtual world into an extraction model to perform object data extraction, and obtaining object data of the virtual object;
performing visual feature recognition on the virtual object based on the object data to obtain visual features of the virtual object, wherein the visual features comprise a shape of the virtual object;
reading a compression model corresponding to the visual features, and inputting the object data into the compression model for data compression processing to obtain compressed data;
wherein the compression model corresponding to the visual features is read from a compression model set comprising compression models corresponding to respective visual features, and each compression model in the compression model set is determined in the following manner:
performing model training based on the type sample set under each visual feature to obtain the compression model corresponding to that visual feature;
calculating gradient correlations between the compression models corresponding to the visual features;
if at least two compression models whose gradient correlation is greater than a preset threshold exist, merging the type sample sets under the visual features corresponding to the at least two compression models; and
performing model training based on the merged type sample set obtained by the merging processing to obtain a compression model corresponding to the updated visual features;
wherein the obtaining the compression model corresponding to each visual feature comprises:
inputting each object data sample in the type sample set under the visual feature into a compression encoder in a model to be trained to perform compression encoding, and outputting a compressed sample of each object data sample;
inputting the compressed samples into a decoder in the model to be trained to perform data reconstruction, and outputting reconstructed data of each object data sample; and
calculating a training loss based on the reconstructed data and the object data samples, adjusting parameters of the model to be trained based on the training loss, and obtaining the compression model after training is completed.
13. A storage medium storing computer-executable instructions that, when executed by a processor, implement the following:
inputting a virtual data set corresponding to a virtual object in a virtual world into an extraction model to perform object data extraction, and obtaining object data of the virtual object;
performing visual feature recognition on the virtual object based on the object data to obtain visual features of the virtual object, wherein the visual features comprise a shape of the virtual object;
reading a compression model corresponding to the visual features, and inputting the object data into the compression model for data compression processing to obtain compressed data;
wherein the compression model corresponding to the visual features is read from a compression model set comprising compression models corresponding to respective visual features, and each compression model in the compression model set is determined in the following manner:
performing model training based on the type sample set under each visual feature to obtain the compression model corresponding to that visual feature;
calculating gradient correlations between the compression models corresponding to the visual features;
if at least two compression models whose gradient correlation is greater than a preset threshold exist, merging the type sample sets under the visual features corresponding to the at least two compression models; and
performing model training based on the merged type sample set obtained by the merging processing to obtain a compression model corresponding to the updated visual features;
wherein the obtaining the compression model corresponding to each visual feature comprises:
inputting each object data sample in the type sample set under the visual feature into a compression encoder in a model to be trained to perform compression encoding, and outputting a compressed sample of each object data sample;
inputting the compressed samples into a decoder in the model to be trained to perform data reconstruction, and outputting reconstructed data of each object data sample; and
calculating a training loss based on the reconstructed data and the object data samples, adjusting parameters of the model to be trained based on the training loss, and obtaining the compression model after training is completed.
CN202211148120.7A 2022-09-20 2022-09-20 Data compression processing method and device Active CN115499635B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211148120.7A CN115499635B (en) 2022-09-20 2022-09-20 Data compression processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211148120.7A CN115499635B (en) 2022-09-20 2022-09-20 Data compression processing method and device

Publications (2)

Publication Number Publication Date
CN115499635A CN115499635A (en) 2022-12-20
CN115499635B true CN115499635B (en) 2024-05-10

Family

ID=84470776

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211148120.7A Active CN115499635B (en) 2022-09-20 2022-09-20 Data compression processing method and device

Country Status (1)

Country Link
CN (1) CN115499635B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115953559B (en) * 2023-01-09 2024-04-12 支付宝(杭州)信息技术有限公司 Virtual object processing method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003046999A (en) * 2001-07-30 2003-02-14 Toshiba Corp Image monitoring system, monitored image distributing method therefor and camera therefor using network
CN1629888A (en) * 2003-12-17 2005-06-22 中国科学院自动化研究所 A Method for Skeletal Object Reconstruction
JP2006185354A (en) * 2004-12-28 2006-07-13 Nikon Corp Residual capacity management device, compression function-equipped memory card and external equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4424845B2 (en) * 1999-12-20 2010-03-03 本田 正 Image data compression method and decompression method
GB201003962D0 (en) * 2010-03-10 2010-04-21 Tangentix Ltd Multimedia content delivery system
EP3235248A4 (en) * 2014-12-15 2018-07-11 Miovision Technologies Incorporated System and method for compressing video data
CN108781281B (en) * 2016-02-26 2021-09-28 港大科桥有限公司 Shape adaptive model based codec for lossy and lossless image compression
GB201717011D0 (en) * 2017-10-17 2017-11-29 Nokia Technologies Oy An apparatus a method and a computer program for volumetric video
CN112534427B (en) * 2018-08-07 2025-04-08 昕诺飞控股有限公司 System and method for compressing sensor data using clustering and shape matching in edge nodes of a distributed computing network
US10872463B2 (en) * 2019-04-01 2020-12-22 Microsoft Technology Licensing, Llc Depth-compressed representation for 3D virtual scene

Also Published As

Publication number Publication date
CN115499635A (en) 2022-12-20

Similar Documents

Publication Publication Date Title
KR102734310B1 (en) Method and device for compressing/decompressing neural network models
CN115359219B (en) Virtual world virtual image processing method and device
CN114973049B (en) Lightweight video classification method with unified convolution and self-attention
CN114238904B (en) Identity recognition method, and training method and device of dual-channel hyper-resolution model
CN112308113A (en) Target identification method, device and medium based on semi-supervision
CN115499635B (en) Data compression processing method and device
CN115359220B (en) Method and device for updating virtual image of virtual world
CN117456028A (en) Method and device for generating image based on text
CN115600157B (en) Data processing method and device, storage medium and electronic equipment
CN115374298B (en) Index-based virtual image data processing method and device
CN115358777B (en) Method and device for processing advertisement delivery in virtual world
CN110390015B (en) Data information processing method, device and system
CN115393022B (en) Cross-domain recommendation processing method and device
CN115809696B (en) Virtual image model training method and device
CN117541963A (en) Method and device for extracting key video frames containing text risks
CN116863484A (en) Character recognition method, device, storage medium and electronic equipment
CN115953559B (en) Virtual object processing method and device
KR20230168258A (en) Image processing methods and devices, computer devices, storage media, and program products
CN115953706B (en) Virtual image processing method and device
CN115731375B (en) Method and device for updating virtual image
CN118522018B (en) Document image processing method and device
CN117808976B (en) A three-dimensional model construction method, device, storage medium and electronic equipment
CN116188731A (en) Virtual image adjusting method and device of virtual world
CN118862176B (en) Desensitization model training method, image desensitization method and device
CN113221871B (en) Character recognition method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant