CN119135959B - Communication method and platform for ultra-high definition video E-band transmission - Google Patents
- Publication number
- CN119135959B · CN202411606334.3A
- Authority
- CN
- China
- Prior art keywords
- transmission
- content
- key
- frame
- compensation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/24—Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
- H04N21/2407—Monitoring of transmitted content, e.g. distribution time, number of downloads
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/234381—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the temporal resolution, e.g. decreasing the frame rate by frame skipping
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/436—Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
- H04N21/4363—Adapting the video stream to a specific local network, e.g. a Bluetooth® network
- H04N21/43632—Adapting the video stream to a specific local network, e.g. a Bluetooth® network involving a wired protocol, e.g. IEEE 1394
- H04N21/43635—HDMI
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
The invention discloses a communication method and platform for E-band transmission of ultra-high definition video, belonging to the field of video transmission. The method comprises: monitoring environmental characteristic data to obtain a transmission influence analysis result; when the transmission influence analysis result is greater than a preset transmission influence threshold, obtaining an influence time axis and a plurality of influence picture contents; deciding a key content compensation proportion and a key frame compensation proportion according to the plurality of influence picture contents; performing compensation generation while transmission proceeds within the influence time axis to obtain generated frames and generated content; and obtaining a quality loss identification parameter, identifying the generated frames and generated content with it, and adding them to the transmission data for transmission display. This addresses the prior-art problems of degraded video transmission quality and loss of key frames and key content caused by the transmission environment when ultra-high definition video is transmitted in the E band, and achieves the technical effects of improving video transmission quality and reducing the loss of key frames and key content during E-band transmission of ultra-high definition video.
Description
Technical Field
The invention relates to the field of video transmission, in particular to a communication method and a platform for ultra-high definition video E-band transmission.
Background
At present, E-band transmission of ultra-high definition video is easily affected by the transmission environment, which degrades video transmission quality and causes key frames and key content to be lost. In particular, E-band transmission is sensitive to environmental conditions: factors such as humidity, temperature, dust and rainfall may interfere with signal propagation and thereby degrade the quality of video transmission. When transmission quality drops far enough, video content, especially its key frames and key content, may be lost, severely damaging the video quality at the receiving end and impairing the user's viewing experience.
Disclosure of Invention
The application provides a communication method and platform for E-band transmission of ultra-high definition video, aiming to solve the prior-art problems of degraded video transmission quality and loss of key frames and key content caused by the transmission environment when ultra-high definition video is transmitted in the E band.
In view of the above problems, the present application provides a communication method and platform for E-band transmission of ultra-high definition video.
The communication method for ultra-high definition video E-band transmission comprises: monitoring and collecting environmental characteristic data in a transmission environment in which ultra-high definition video is transmitted over the E band, and performing environmental characteristic data prediction and transmission influence analysis to obtain a transmission influence analysis result; when the transmission influence analysis result is greater than a preset transmission influence threshold, obtaining an influence time axis of the transmission influence analysis result, and indexing the transmitted ultra-high definition video to obtain a plurality of influence picture contents in a plurality of affected influence frames; performing key frame analysis and key content analysis on the plurality of influence picture contents to obtain a plurality of key content duty ratios and a key frame duty ratio, calculating an overall key content duty ratio, and deciding a key content compensation proportion and a key frame compensation proportion according to the overall key content duty ratio and the key frame duty ratio; monitoring transmission data while transmission proceeds within the influence time axis to obtain lost frames and lost content, and performing compensation generation of the lost frames and lost content according to the key content compensation proportion and the key frame compensation proportion to obtain generated frames and generated content; and, according to the loss proportion of the lost frames and lost content, combined with the overall key content duty ratio and the key frame duty ratio, deciding a quality loss identification parameter, identifying the generated frames and generated content with it, and adding them to the transmission data for transmission display.
In another aspect, the application provides a communication platform for ultra-high definition video E-band transmission, comprising: a transmission influence analysis module, configured to monitor and collect environmental characteristic data in a transmission environment in which ultra-high definition video is transmitted over the E band, and to perform environmental characteristic data prediction and transmission influence analysis to obtain a transmission influence analysis result; an influence identification module, configured to obtain, when the transmission influence analysis result is greater than a preset transmission influence threshold, an influence time axis of the transmission influence analysis result, and to index the transmitted ultra-high definition video to obtain a plurality of influence picture contents in a plurality of affected influence frames; a compensation proportion acquisition module, configured to perform key frame analysis and key content analysis on the plurality of influence picture contents to obtain a plurality of key content duty ratios and a key frame duty ratio, to calculate an overall key content duty ratio, and to decide a key content compensation proportion and a key frame compensation proportion according to the overall key content duty ratio and the key frame duty ratio; a compensation generation module, configured to monitor transmission data while transmission proceeds within the influence time axis to obtain lost frames and lost content, and to perform compensation generation of the lost frames and lost content according to the key content compensation proportion and the key frame compensation proportion to obtain generated frames and generated content; and a quality loss identification module, configured to decide a quality loss identification parameter according to the loss proportion of the lost frames and lost content, combined with the overall key content duty ratio and the key frame duty ratio, to identify the generated frames and generated content with it, and to add them to the transmission data for transmission display.
One or more technical schemes provided by the application have at least the following technical effects or advantages:
By monitoring and collecting environmental characteristic data in the transmission environment, and by predicting and analyzing that data, a transmission influence analysis result is obtained that pre-judges how strongly the transmission environment will affect video transmission quality and provides a basis for subsequent compensation processing. When the transmission influence analysis result is greater than the preset transmission influence threshold, meaning that transmission quality will be affected, the influence time axis is acquired and the affected frames and picture contents within it are indexed in the transmitted video, precisely locating the video segments affected by the transmission environment and providing a data basis for subsequent key content analysis. Key frame analysis and key content analysis are then performed on the indexed influence picture contents to obtain parameters such as the key content duty ratio and the key frame duty ratio, from which the importance of the affected video segments is judged; the key content compensation proportion and key frame compensation proportion decided from these parameters determine which key frames and content need priority protection and compensation, improving the pertinence of subsequent compensation. While transmission proceeds within the influence time axis, the transmission data is monitored in real time, lost frames and lost content are located accurately, and compensation generation is performed in combination with the key content compensation proportion and key frame compensation proportion, dynamically balancing reconstruction quality against transmission bandwidth and computation resources. Finally, a quality loss identification parameter is decided from the loss proportion together with the duty-ratio indicators, and the generated frames and generated content are identified and added to the transmission data for display, making the degree of transmission damage visible. Compared with the prior art, this solves the technical problems of degraded video transmission quality and loss of key frames and key content caused by the transmission environment, and achieves the technical effects of improving video transmission quality and reducing the loss of key frames and key content when ultra-high definition video is transmitted in the E band.
The foregoing is only an overview of the technical solution of the present application. In order that the technical means of the present application may be understood more clearly and implemented in accordance with this description, and in order that the above and other objects, features and advantages of the present application may be more readily apparent, specific embodiments of the present application are set forth below.
Drawings
Fig. 1 is a schematic flow chart of a communication method for E-band transmission of ultra-high definition video according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a communication platform for E-band transmission of ultra-high definition video according to an embodiment of the present application.
Reference numerals: transmission influence analysis module 11; influence identification module 12; compensation proportion acquisition module 13; compensation generation module 14; quality loss identification module 15.
Detailed Description
The technical scheme provided by the application has the following overall thought:
The embodiment of the application provides a communication method and platform for ultra-high definition video E-band transmission. First, through real-time monitoring and trend prediction of the transmission environment, a transmission influence analysis result describing the effect of environmental change on transmission quality is obtained; when that influence exceeds a preset transmission influence threshold, a targeted quality compensation mechanism is started. In the compensation process, the influence time axis of the affected transmission is located, the video frames within it are extracted, and the video content needing priority protection is determined through key frame analysis and key content analysis, which decides the proportion and strategy of subsequent compensation. Once key frames or key content are detected as lost during transmission, the compensation flow is triggered, and the damaged video is repaired accurately and efficiently using parameters such as the key content duty ratio and key frame duty ratio. The repair effect is then quantified into a quality loss identification parameter that is added to the compensated video data, intuitively reflecting the degree of transmission damage, thereby improving video transmission quality and reducing the loss of key frames and key content when ultra-high definition video is transmitted in the E band.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
An embodiment of the present application provides a communication method for E-band transmission of ultra-high definition video, as shown in fig. 1, the method including:
S1: monitor and collect environmental characteristic data in a transmission environment in which ultra-high definition video is transmitted over the E band, and perform environmental characteristic data prediction and transmission influence analysis to obtain a transmission influence analysis result.
Specifically, first, a transmission environment for performing ultra-high definition video transmission based on E-band transmission is monitored and collected, and environmental characteristic data in the transmission environment is obtained. The environmental characteristic data include, but are not limited to, parameters of factors such as temperature, humidity, dust concentration and the like which have an influence on the E-band transmission quality. And then, predicting the environmental characteristic data and analyzing the transmission influence by using the acquired environmental characteristic data. Specifically, based on the environmental characteristic data collected by history, an environmental characteristic data prediction model is established by technical means such as machine learning, the change trend of the environmental characteristic data in a period of time in the future is predicted, and the influence of environmental change on the transmission quality is judged. Meanwhile, the influence degree of the environmental features on the transmission quality is judged by analyzing the association relation between the current and predicted environmental feature data and the E-band transmission quality, and a transmission influence analysis result is obtained.
The state and the change trend of the transmission environment are mastered in real time through the collection, the prediction and the transmission influence analysis of the environmental characteristic data, the influence of the environmental change on the E-band transmission quality is prejudged, a basis is provided for the subsequent targeted transmission optimization and quality control, and the timeliness and the pertinence of the transmission processing are improved, so that the problem of the transmission quality degradation caused by the environmental change is furthest reduced under the condition that the E-band transmission is easily influenced by the environment.
S2: when the transmission influence analysis result is greater than a preset transmission influence threshold, obtain an influence time axis of the transmission influence analysis result, and index the transmitted ultra-high definition video to obtain a plurality of influence picture contents in a plurality of affected influence frames.
Specifically, it is first judged, from the obtained transmission influence analysis result, whether the influence of the current transmission environment on E-band ultra-high definition video transmission quality reaches the preset transmission influence threshold. The preset transmission influence threshold is set according to the practical application and transmission requirements and represents the critical point of environmental influence; when it is exceeded, transmission quality is considered likely to be significantly affected and further optimization measures are needed. If the transmission influence analysis result shows that the current environmental influence exceeds the preset transmission influence threshold, a transmission quality protection mechanism is triggered. First, the influence time axis in the transmission influence analysis result is acquired, which describes the start and end times of the predicted environmental influence. Using this influence time axis, the affected video segments in the ultra-high definition video stream being transmitted are precisely located. Next, key frames and key content are extracted from the video segments within the influence time axis: the video frames in the affected segments are quickly located as influence frames through the frame index information of the video coding format, and the picture content of each influence frame, such as faces, text and target objects, is extracted to obtain a plurality of influence picture contents.
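As a rough illustration of this indexing step, the sketch below maps an influence time axis onto frame indices of the stream. The fixed frame rate and the helper name are assumptions, since the text only says that affected frames are located through the frame index information of the coding format.

```python
FPS = 60  # assumed frame rate of the ultra-high definition stream

def affected_frame_indices(t_start: float, t_end: float, fps: int = FPS):
    """Map the predicted impact window [t_start, t_end] (seconds) to frame indices."""
    first = int(t_start * fps)
    last = int(t_end * fps)
    return range(first, last + 1)

# e.g. an impact window of 12.5 s .. 14.0 s at 60 fps covers frames 750..840
frames = list(affected_frame_indices(12.5, 14.0))
```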
By locating the video data influenced by the transmission environment, a foundation is laid for the subsequent targeted transmission quality optimization, and support is provided for minimizing the influence of environmental deterioration on the whole video transmission quality, so that the optimization process can intervene in advance, and the abrupt deterioration of the transmission quality is avoided.
S3: perform key frame analysis and key content analysis on the plurality of influence picture contents to obtain a plurality of key content duty ratios and a key frame duty ratio, calculate an overall key content duty ratio, and decide a key content compensation proportion and a key frame compensation proportion according to the overall key content duty ratio and the key frame duty ratio.
Specifically, key frame analysis and key content analysis are performed separately for each influence frame. In key frame analysis, the importance of the current influence frame relative to the whole video sequence is evaluated. Key frames carry most of the video's semantic information during transmission and serve as the reference for subsequent inter-frame prediction and motion compensation; once a key frame is lost or corrupted, errors propagate and the reconstruction quality of a series of subsequent frames suffers. The proportion of affected frames that are key frames is therefore represented by a key frame duty ratio indicator: the higher the proportion, the higher the criticality of the affected video segments. In key content analysis, the semantic importance of key content regions within an influence frame is assessed. Different key content regions, such as faces, text and target objects, affect the expression and perceived quality of video content to different degrees; a face region, for example, typically carries more video semantics, so its integrity has a greater impact on how a picture of a person is perceived. Fine-grained importance assessment is therefore performed on the different types of key content regions, and the area each key content region occupies within the influence frame is represented by a key content duty ratio indicator: the higher the ratio, the more important that type of key content.
After the key frame duty ratio and key content duty ratios of each affected frame are obtained, the criticality of the entire affected video segment is evaluated at a larger scale. A weighted average of the key content duty ratios of all influence frames is calculated to obtain a comprehensive overall key content duty ratio, which characterizes how strongly the deteriorating transmission environment affects the semantic integrity of the whole video segment: the higher the ratio, the more critical semantic information the affected segments contain. The key frame duty ratio and overall key content duty ratio are then used to determine the strategy and parameters of subsequent transmission recovery. Specifically, the duty-ratio indicators are mapped to the corresponding key frame compensation proportion and key content compensation proportion through set threshold intervals and mapping functions. The compensation proportion reflects how many resources the recovery processing needs to invest: the higher the proportion, the more redundant transmission, error-correction coding and similar measures are required to guarantee the recovery quality of the key video data. This adaptive recovery strategy saves transmission and computation resources as far as possible while ensuring the recovery effect.
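A minimal sketch of the S3 computation follows, assuming simple arithmetic definitions: the key frame duty ratio as the share of pre-marked key frames among the affected frames, the overall key content duty ratio as the mean of per-frame content ratios, and a threshold-interval mapping to compensation proportions. The threshold values and proportions are illustrative, not taken from the patent.

```python
def key_frame_ratio(affected_frames, key_frame_flags):
    """key_frame_flags: set of frame indices pre-marked as key frames."""
    hits = sum(1 for f in affected_frames if f in key_frame_flags)
    return hits / len(affected_frames)

def overall_key_content_ratio(per_frame_content_ratios):
    """Mean of each affected frame's key-content pixel ratio."""
    return sum(per_frame_content_ratios) / len(per_frame_content_ratios)

def decide_compensation(duty_ratio):
    """Map a duty ratio to a compensation proportion via threshold intervals."""
    if duty_ratio > 0.6:
        return 1.0   # high semantic value at risk: compensate aggressively
    if duty_ratio > 0.3:
        return 0.6
    return 0.3

kf_ratio = key_frame_ratio(range(750, 841), key_frame_flags={750, 780, 810, 840})
kc_ratio = overall_key_content_ratio([0.42, 0.35, 0.51])
kf_comp, kc_comp = decide_compensation(kf_ratio), decide_compensation(kc_ratio)
```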
S4: monitor transmission data while transmission proceeds within the influence time axis to obtain lost frames and lost content, and perform compensation generation of the lost frames and lost content according to the key content compensation proportion and the key frame compensation proportion to obtain generated frames and generated content.
Specifically, first, transmission data of the E band is monitored in real time within a determined influence time axis. By analyzing and checking the transmission data, the data loss and error code condition caused by the environmental deterioration are found. Once the data loss is detected, it is quickly determined to which influencing frame the lost data belongs, and the lost specific content, i.e. part of the key content area in the key frame, is further located. Next, different strategies and methods are employed to recover the lost data based on the determined key frame compensation ratio and key content compensation ratio. For lost key frames, a video frame repair technology based on deep learning is adopted, and the lost key frames are intelligently reconstructed by analyzing the time-space correlation before and after the key frames and using successfully received frames as references. In the repairing process, the front and back frames of the lost frame are taken as input, a repairing result similar to the visual semantics of the original lost frame is synthesized through a generator network, and then quality evaluation and optimization are carried out through a discriminator network, so that a high-quality key frame repairing result is obtained. And dynamically adjusting the complexity and the generation scale of the frame repair model according to the key frame compensation proportion. The higher the key frame compensation proportion is, the deeper the network structure and the more reference frames are adopted by the model, so that the better repairing effect is obtained, and meanwhile, more computing resources are consumed.
For lost content, semantic repair techniques based on attention mechanisms are employed. Unlike frame repair, semantic repair focuses more on the semantic integrity of local regions, and targeted repair optimization is performed according to the content type of the lost content and surrounding context information. For example, for lost face regions, repair focuses on maintaining the continuity and identity consistency of facial features, and for lost text regions, repair focuses on character recognition and reconstruction. For this purpose, a multi-task semantic repair network is constructed that dynamically focuses on different types of lost regions by means of an attention mechanism, adaptively extracts semantic features from adjacent successfully received regions, and generates targeted repair results through corresponding decoder branches. Likewise, the parameter scale and computation of the semantic repair network can be dynamically adjusted according to the key content compensation proportion to balance repair quality against resource efficiency. The repair results for the lost key frames and key content regions, that is, the compensation of the lost frames and lost content, yield the generated frames and generated content.
By adopting a self-adaptive and content-aware transmission recovery mechanism, the space-time correlation and semantic features of video data are fully utilized, and the intelligent balance between the recovery quality and the resource efficiency is carried out by combining the deep learning technology, so that the anti-interference capability and the recovery performance of E-band transmission are improved. Even under a severe transmission environment, the integrity of the ultra-high definition video is guaranteed to the maximum extent through targeted compensation processing, and smooth and stable video is provided.
S5: according to the loss proportion of the lost frames and lost content, combined with the overall key content duty ratio and the key frame duty ratio, decide a quality loss identification parameter, identify the generated frames and generated content with it, and add them to the transmission data for transmission display.
Specifically, a loss proportion is first calculated from the detected lost frames and lost content and from the influence picture contents obtained in the influence frames; it reflects the overall degree of data loss during transmission, and the higher the loss proportion, the greater the impact on transmission quality. Then, a key proportion is obtained by weighted averaging of the calculated overall key content duty ratio and key frame duty ratio; it reflects the share of key content in the lost video data, and the higher the key proportion, the greater the impact of the lost data on video quality and semantic integrity. Next, combining the loss proportion and the key proportion, a quality loss identification parameter is obtained through a preset mapping function or lookup table; it jointly considers the total amount and the criticality of the lost data and reflects the comprehensive impact of the deteriorating transmission environment on video quality. The quality loss identification parameter can be expressed as a percentage or a grade value and serves as an intuitive index of transmission quality degradation. The generated frames and generated content are then marked with this quality loss identification parameter, clearly indicating that the video segment has undergone transmission recovery processing and stating the degree of quality loss before processing. Finally, the identified generated frames and generated content are fused with the originally transmitted video data to obtain the transmission output of the ultra-high definition video, improving video transmission quality and reducing the loss of key frames and key content.
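The following sketch illustrates one way the identification step could attach the quality loss identification parameter to generated frames and content before they are merged back into the transmission data; the metadata layout and all names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class GeneratedUnit:
    frame_index: int
    payload: bytes                      # repaired frame or content block
    meta: dict = field(default_factory=dict)

def identify(unit: GeneratedUnit, quality_loss_percent: float) -> GeneratedUnit:
    """Attach the quality-loss identification parameter before re-insertion."""
    unit.meta["compensated"] = True
    unit.meta["quality_loss_percent"] = round(quality_loss_percent, 1)
    return unit

stream = [GeneratedUnit(812, b"...")]          # units produced by the compensators
tagged = [identify(u, 23.5) for u in stream]   # ready to merge into transmission data
```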
Further, the embodiment of the application comprises:
The method comprises: monitoring and collecting environmental characteristic data in a transmission environment in which ultra-high definition video is transmitted over the E band; collecting a sample environmental characteristic data set based on historical environmental characteristic data monitoring records in the transmission environment; collecting the environmental characteristic data observed after a preset time length and marking it as a sample predicted environmental characteristic data set; training an environmental characteristic data predictor with the sample environmental characteristic data set and the sample predicted environmental characteristic data set; predicting the environmental characteristic data to obtain predicted environmental characteristic data; and performing transmission influence analysis according to the predicted environmental characteristic data to obtain the transmission influence analysis result.
In a feasible implementation mode, through learning and modeling of historical environment characteristic data, pre-judgment of future transmission environment and pre-estimation of transmission quality are achieved, and more accurate and timely decision basis is provided for subsequent transmission optimization.
Firstly, various sensors and detection instruments are deployed on transmission equipment to monitor the transmission environment of E-band ultra-high definition video transmission in real time, and various environmental characteristic parameters including meteorological parameters such as temperature, humidity, air pressure, wind speed, rainfall and the like, and transmission link related parameters such as geographic positions, topography, electromagnetic interference and the like are acquired. Meanwhile, a sample environment characteristic data set is obtained by targeted sampling from historical environment characteristic data monitoring records in a transmission environment. The set contains environment characteristic data of the transmission environment at different times and under different conditions, and covers various change modes and influence factors of the transmission environment. Meanwhile, for each sample environmental characteristic data, collecting actual environmental characteristic data of the sample environmental characteristic data after a preset time length (such as 1 hour in the future), and marking the actual environmental characteristic data as corresponding sample predicted environmental characteristic data to obtain a sample predicted environmental characteristic data set. Then, using the collected sample environmental feature data set and the sample prediction environmental feature data set, an environmental feature data predictor is constructed and trained. The environmental characteristic data predictor may employ various time series prediction models, such as a recurrent neural network, a long-short term memory network, and the like. The sample environmental characteristic data is input into the predictor, the sample predicted environmental characteristic data is used as a supervision signal, and the environmental characteristic data predictor learns the evolution rule and the change trend of the environmental characteristic data in the time dimension. The environmental characteristic data predictor obtained through training predicts the newly acquired environmental characteristic data to obtain predicted environmental characteristic data in a future period of time. And then, taking the predicted environmental characteristic data as input, carrying out transmission influence analysis, and evaluating the potential influence of the future transmission environment on the transmission quality. For example, the transmission influence analysis can be implemented by means of an expert knowledge base, an influence factor model and the like, for example, a mapping relation base between transmission environment characteristics and transmission quality indexes is established in advance, and according to predicted environment characteristic data, corresponding estimated quality indexes such as transmission error rate, signal to noise ratio and the like are obtained by table lookup, and used as a transmission influence analysis result to indicate the degree of transmission quality degradation possibly caused by a future transmission environment.
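A minimal sketch of this predictor-training step follows, under the assumption of a sliding window of past observations as input and the observation one preset horizon ahead as the supervision label. A linear model stands in for the recurrent network or LSTM the text mentions, and all sizes and names are illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge

HORIZON = 12   # preset time length, e.g. 12 samples ~ 1 hour at 5-minute sampling
WINDOW = 24    # number of past samples fed to the predictor

def make_training_pairs(history):
    """history: (T, F) array of environmental features (temperature, humidity, ...)."""
    X, y = [], []
    for t in range(WINDOW, len(history) - HORIZON):
        X.append(history[t - WINDOW:t].ravel())   # sample environmental data
        y.append(history[t + HORIZON])            # sample *predicted* data (label)
    return np.asarray(X), np.asarray(y)

history = np.random.rand(1000, 5)                 # stand-in for monitored records
X, y = make_training_pairs(history)
predictor = Ridge().fit(X, y)                     # environmental characteristic data predictor

latest = history[-WINDOW:].ravel()[None, :]
predicted_env = predictor.predict(latest)[0]      # predicted environmental characteristic data
```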
By means of the transmission environment pre-judging method based on machine learning, history monitoring data are fully utilized, active pre-judging and influence analysis are conducted on future transmission environments, and an advance is provided for follow-up transmission optimization control, so that optimization measures can be intervened in time before transmission quality is reduced. Compared with the traditional passive transmission control method, the method can sense the change of the transmission environment more accurately and timely, reduce the risks of transmission interruption and quality degradation, and improve the stability and reliability of transmission.
Further, the embodiment of the application comprises:
The method further comprises: obtaining a sample transmission influence analysis result set according to the ultra-high definition video transmission quality monitoring records in the transmission environment and the degree to which transmission quality is affected under different sample environmental characteristic data; training a transmission influence analyzer with the sample environmental characteristic data set and the sample transmission influence analysis result set; and performing transmission influence analysis on the predicted environmental characteristic data with the transmission influence analyzer to obtain the transmission influence analysis result.
In a preferred embodiment, a data-driven transmission influence analyzer is constructed by mining and learning historical transmission quality monitoring records, and future transmission quality change trends are automatically estimated and pre-warned according to predicted environmental characteristic data to obtain transmission influence analysis results.
Firstly, according to transmission quality monitoring equipment and a mechanism deployed in a transmission environment, ultra-high definition video transmission quality monitoring records in a period of history are obtained, wherein the ultra-high definition video transmission quality monitoring records comprise various key quality indexes in the transmission process, such as bit error rate, packet loss rate, signal-to-noise ratio, delay jitter and the like. And then, carrying out association alignment on the transmission quality monitoring records and the environmental characteristic data acquired at the same time, and researching the distribution rule and the affected degree of the transmission quality index under different environmental characteristic conditions through statistical analysis and data mining technology. On the basis, each historical environmental characteristic data sample is endowed with an influence degree label, and the influence of the environmental condition on transmission quality is described, so that a sample transmission influence analysis result set is obtained. Then, using the sample environmental feature data set as input, the sample transmission impact analysis result set as output tag, a transmission impact analyzer is constructed and trained, which may employ various supervised learning algorithms such as support vector machines, random forests, neural networks, etc. By inputting the sample environmental characteristic data into the transmission influence analyzer, the analysis result of the sample transmission influence is used as a supervision signal, and a complex mapping relation between the environmental characteristic and the transmission quality influence is learned. And then, inputting the obtained predicted environmental characteristic data into a trained transmission influence analyzer, automatically analyzing and estimating the transmission influence to obtain a transmission influence analysis result, indicating possible negative influence of transmission environmental change on transmission quality in a future period, and providing an important basis for transmission optimization.
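The sketch below shows the analyzer as a supervised regressor from environmental features to an impact-degree label, using one of the algorithm families the text names (support vector machine, random forest, neural network); the features, labels and threshold value are placeholder assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_env = rng.random((500, 5))        # temperature, humidity, rainfall, dust, wind
y_impact = rng.random(500)          # impact-degree label from aligned quality records

analyzer = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_env, y_impact)

predicted_env = rng.random((1, 5))  # output of the environmental predictor
impact = analyzer.predict(predicted_env)[0]
IMPACT_THRESHOLD = 0.6              # preset transmission influence threshold (assumed)
if impact > IMPACT_THRESHOLD:
    print("predicted environment exceeds threshold; trigger compensation pipeline")
```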
Further, the embodiment of the application comprises:
The method further comprises: obtaining the number of key frames among the plurality of influence frames according to the pre-marking of all video frames in the ultra-high definition video, and obtaining the key content regions within the plurality of influence frames, thereby obtaining the key frame duty ratio and the plurality of key content duty ratios; calculating the average of the key content duty ratios to obtain the overall key content duty ratio; collecting a sample key frame duty ratio set and a sample overall key content duty ratio set; setting marks according to each sample key frame duty ratio to obtain a sample key frame compensation proportion set, and setting marks according to each sample overall key content duty ratio to obtain a sample key content compensation proportion set; training a key frame compensation proportion classification branch with the sample key frame duty ratio set and the sample key frame compensation proportion set, and training a key content compensation proportion classification branch with the sample overall key content duty ratio set and the sample key content compensation proportion set; combining the two branches to obtain a compensation proportion classifier; and classifying the overall key content duty ratio and the key frame duty ratio with the compensation proportion classifier to obtain the key content compensation proportion and the key frame compensation proportion.
In one possible implementation, first, a plurality of video frames belonging to a key frame in a determined influence frame are rapidly filtered out according to the pre-marking of all video frames in the ultra-high definition video. The pre-marking is key frame indication metadata marked by manual or algorithm in the video making or encoding stage, and represents the importance degree of each video frame in global semantics. On the basis, the proportion of the number of key frames in the influence frames to the total influence frames is counted and used as the key frame duty ratio. Meanwhile, for each picture within the influence frame, a ratio of the number of pixels defined as the key content area in terms of the pre-mark to the total number of pixels is calculated as the key content duty ratio of the influence frame, resulting in a plurality of key content duty ratios. And then, carrying out arithmetic average on a plurality of key content duty ratios obtained by a plurality of influence frames to obtain an overall key content duty ratio, and describing the overall value density of video segment content with influenced transmission.
And then, sampling massive historical or simulated video transmission data to obtain a representative key frame duty ratio and overall key content duty ratio combined sample, and obtaining a sample key frame duty ratio set and a sample overall key content duty ratio set. And setting and marking the optimal key frame compensation proportion and key content compensation proportion for each sample ratio according to expert experience and experimental statistics to form a paired sample key frame compensation proportion set. The compensation proportion characterizes the relative duty ratio of key frames or key contents which need to be recovered in the transmission recovery process, and the higher the compensation proportion is, the more data redundancy and computing resources are needed to be input to ensure the recovery quality of the part of semantics. And then, constructing a compensation proportion classifier by using the sample key frame duty ratio set, the sample overall key content duty ratio set and the sample key frame compensation proportion set. The compensation proportion classifier comprises two parallel classifying branches, namely a key frame compensation proportion classifying branch and a key content compensation proportion classifying branch, which are respectively used for determining the key frame compensation proportion and the key content compensation proportion. Each branch can adopt a support vector machine, a decision tree, a neural network and other classification algorithms. And inputting the sample occupation ratio into a classification branch, taking the sample compensation ratio as a training target, and learning the corresponding relation between the occupation ratio and the optimal compensation ratio by a compensation ratio classifier. After the two branches are independently trained, the two branches are combined and integrated to form the complete compensation proportion classifier. And then, inputting the obtained key frame duty ratio and the whole key content duty ratio which actually affect the video clips into a trained compensation proportion classifier to obtain corresponding key frame compensation proportion and key content compensation proportion.
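A hedged sketch of the two-branch compensation proportion classifier follows: each branch maps a duty ratio to a compensation-proportion class, here with decision trees as the stand-in classifier and synthetic expert labels, since the patent fixes neither the algorithm nor the class boundaries.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# sample duty-ratio sets with expert-marked compensation proportion classes
ratios = np.linspace(0, 1, 50).reshape(-1, 1)
labels = np.digitize(ratios.ravel(), [0.3, 0.6])     # classes 0/1/2 = low/mid/high

frame_branch = DecisionTreeClassifier().fit(ratios, labels)
content_branch = DecisionTreeClassifier().fit(ratios, labels)

PROPORTIONS = {0: 0.3, 1: 0.6, 2: 1.0}               # class -> compensation proportion

def compensation_proportions(kf_duty, kc_duty):
    """Run both branches of the combined compensation proportion classifier."""
    kf = PROPORTIONS[int(frame_branch.predict([[kf_duty]])[0])]
    kc = PROPORTIONS[int(content_branch.predict([[kc_duty]])[0])]
    return kf, kc

kf_comp, kc_comp = compensation_proportions(0.7, 0.4)
```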
Further, the embodiment of the application comprises:
The method further comprises: monitoring the transmission data while the ultra-high definition video is transmitted within the influence time axis, and performing packet-loss detection against the standard data to obtain the lost frames and lost content; training a frame compensator and a content compensator on the transmission loss data of ultra-high definition video over historical time, the two comprising M frame compensation branches and N content compensation branches respectively, where M and N are integers greater than 1; calculating and rounding, according to the key content compensation proportion and the key frame compensation proportion, the number of frame compensation branches and the number of content compensation branches to enable; inputting the frames before and after a lost frame into that number of frame compensation branches of the frame compensator to perform frame compensation generation, obtaining a plurality of branch-generated frames, and fusing them to obtain the generated frame; and inputting the successfully transmitted content of the frame containing the lost content into that number of content compensation branches of the content compensator to perform content generation, obtaining a plurality of branch-generated contents, and fusing them to obtain the generated content.
In a preferred embodiment, first, transmission data of the ultra-high definition video of the E-band is monitored in real time within a determined influence time axis. And positioning the data loss condition caused by channel degradation in the transmission process by utilizing an error detection algorithm through comparing the received transmission data with the standard data sent by the video coding side. The data loss may occur at different granularity levels, and may be the loss of the whole video frame or the loss of the content of a local area in the frame. By recording information such as frame index, intra-frame coordinate area and the like of each lost position in detail in the detection process, a structured lost frame and lost content are formed, and accurate positioning reference is provided for subsequent targeted repair.
Then, a multi-branch, end-to-end frame compensator and a content compensator are constructed based on the historical video transmission data. The frame compensator comprises M frame compensation branches, the content compensator comprises N content compensation branches, and M and N are integers larger than 1. And then, dynamically determining the number of frame compensation branches and content compensation branches actually participating in the repair task according to the obtained key content compensation proportion and key frame compensation proportion. Specifically, the compensation ratio is multiplied by the total number of branches M and N to obtain the enabled frame compensation branch number and content compensation branch number. The higher the compensation proportion is, the larger the influence of the lost content on the video semantics is represented, more repair branches are required to be called, and more calculation resources are input so as to ensure the quality of repair reconstruction.
And then, inputting the detected frame data of a plurality of adjacent frames before and after the lost frame, as well as the context information such as the time stamp, the coding parameters and the like of the lost frame into a certain number of frame compensation branches, and executing the frame repair task in parallel. Each branch independently estimates and generates a complete repair frame, representing one possible repair result. And then, comprehensively considering factors such as weight coefficients of all branches, quality scores of generated frames and the like, and obtaining a final generated frame by fusion modes such as weighted average or voting and the like to be used as substitution and filling of lost frames. And meanwhile, focusing on the incomplete frame where the lost content is, inputting partial content area which is successfully transmitted together with prior information such as the position, the size, the peripheral correlation and the like of the lost content into a determined number of content compensation branches, and executing the content repair task in parallel. Each branch propagates semantic information from different directions and scales by utilizing the context clues provided by the survivor area, and gradually fills and refines the image content of the lost area to obtain a natural and coherent repair result. Similarly, the content restoration results generated by each branch are fused to obtain final generated content, and the final generated content is seamlessly embedded into the original incomplete frame to obtain a complete restored video frame.
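The branch-count and fusion logic described above reduces to a small amount of arithmetic; the sketch below shows the rounding of proportion times branch total and a weighted-average fusion of branch outputs. Branch totals, weights and array sizes are illustrative.

```python
import numpy as np

M_FRAME_BRANCHES, N_CONTENT_BRANCHES = 8, 6

def enabled_branches(proportion, total):
    """Round proportion x total, keeping at least one branch active."""
    return max(1, round(proportion * total))

def fuse(branch_outputs, weights):
    """Weighted average of per-branch repaired frames or content blocks."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return np.tensordot(w, np.stack(branch_outputs), axes=1)

n_frame = enabled_branches(0.6, M_FRAME_BRANCHES)       # 5 of 8 frame branches
n_content = enabled_branches(0.4, N_CONTENT_BRANCHES)   # 2 of 6 content branches

outputs = [np.random.rand(64, 64, 3) for _ in range(n_frame)]  # stand-in branch frames
generated_frame = fuse(outputs, weights=[1.0] * n_frame)       # here: a plain average
```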
Through self-adaptive video transmission repair, the number and combination of repair branches are flexibly adjusted according to the semantic importance of lost content, repair reconstruction work is purposefully carried out on two dimensions of a coded frame and a content area, and the estimation and recovery capacity of lost data are improved.
Further, the embodiment of the application comprises:
The method further comprises: acquiring a sample adjacent-frame data set and a sample lost-frame data set from the transmission loss data of ultra-high definition video over historical time; constructing M frame compensation branches based on generative adversarial networks, each frame compensation branch comprising a generation network and an adversarial network; randomly selecting, from the sample adjacent-frame data set and the sample lost-frame data set, M subsets of generation training data, where the M subsets intersect, and training the M frame compensation branches on them respectively until convergence to obtain M trained frame compensation branches forming the frame compensator; acquiring a sample successfully-transmitted content set and a sample lost content set from the transmission loss data of ultra-high definition video over historical time; and constructing and training the N content compensation branches with the sample successfully-transmitted content set and the sample lost content set to obtain the content compensator.
In a preferred embodiment, first, two sets of training samples are collected based on the transmission loss data monitored in historical ultra-high definition video transmission records. One is a sample adjacent-frame data set, containing a large number of paired consecutive frames adjacent to lost frames, reflecting the temporal context at the moment the loss occurred. The other is a sample lost-frame data set, containing a large number of the lost frames themselves, serving as references for repair reconstruction. These two sample sets form the data basis for training the frame compensator and cover rich loss patterns and content features. Then, based on generative adversarial networks, a multi-branch frame compensator is designed, consisting of M parallel frame compensation branches, each containing one generation network and one adversarial network. The adversarial network receives the output of the generation network together with the original real lost frame, learns to judge the difference between the generated frame and the real frame through adversarial training, and guides the generation network to keep improving repair quality until generated and real frames are hard to distinguish visually. The branches adopt different network structures and parameter configurations, representing complementary repair strategies and feature-characterization capabilities; they mine and exploit the context of lost frames from different angles, strengthening the diversity and robustness of the repair process. Next, the collected sample adjacent-frame data set and sample lost-frame data set are used to construct M independent end-to-end frame compensation branches. Specifically, M different training-data subsets are randomly selected from the two sample sets, each subset containing a certain number of adjacent-frame/lost-frame pairs. The M subsets are not completely disjoint but intersect and overlap to some degree, promoting knowledge sharing and collaborative learning among branches. End-to-end generative adversarial training is then performed on each branch with its subset: the adjacent frames are fed into the generation network, the generation network's output and the original lost frame are fed into the adversarial network, and the parameters of both networks are jointly optimized by minimizing the adversarial loss between generated and original frames, so that the generation network restores lost frames as faithfully as possible while the adversarial network accurately judges the authenticity of the results. The training of the M branches is mutually independent and efficiently parallel, markedly improving training speed and resource utilization, while the shared samples enhance diversity and complementarity among branches. After training to convergence, the M trained frame compensation branches form a complete, diversified frame compensator.
Meanwhile, for the content-level loss repair task, two further sets of key training samples are collected from the historical transmission loss data. One is a sample successfully-transmitted content set, containing a large number of intact, successfully received video-frame content blocks that reflect the local visual semantics transmitted successfully. The other is a sample lost content set, containing local video content blocks with data loss that are adjacent, within the same frame, to successfully transmitted content. These two sample sets form the data basis for training the content compensator, covering rich local semantic correlations and data-missing patterns and providing the necessary prior knowledge and reference information for regional content repair. Subsequently, the N parallel content compensation branches of the content compensator are constructed and trained with the sample successfully-transmitted content set and the sample lost content set. Like the frame compensation branches, each content compensation branch adopts an end-to-end structure based on deep convolutional neural networks, but focuses on learning the semantic characterization and generation of local-region content: the successfully transmitted content is taken as input, high-level semantic features are extracted by an encoder network, and a decoder network restores those features into complete content blocks. The restored content block is compared with the original lost content block, and the reconstruction loss is computed as the optimization target of the content compensation branch. To further improve the realism and coherence of content compensation, attention mechanisms, adversarial training and similar techniques can be introduced into the branches, strengthening the modeling of correlations between local regions and the generation of texture details. The N content compensation branches are trained independently of one another, with shared sample data enhancing generalization and robustness. The N trained content compensation branches then form a complete, diversified content compensator.
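As a small illustration of the overlapping-subset scheme for the M branches, the sketch below draws M random subsets from the paired training samples so that any two subsets intersect with high probability; the subset fraction is an assumption.

```python
import random

def overlapping_subsets(sample_ids, m_branches, subset_frac=0.5):
    """Each branch gets its own random subset; subsets deliberately overlap."""
    n = max(1, int(len(sample_ids) * subset_frac))
    return [random.sample(sample_ids, n) for _ in range(m_branches)]

pairs = list(range(10_000))   # ids of (adjacent-frames, lost-frame) training pairs
subsets = overlapping_subsets(pairs, m_branches=8)
# with subset_frac=0.5, any two subsets share ~25% of all samples on average,
# giving the branches both shared knowledge and diverse training data
```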
Further, the embodiment of the application comprises:
calculating a loss proportion from the lost frames, the lost content and the plurality of affected picture contents in the plurality of affected frames; obtaining a key proportion by weighted calculation from the overall key content proportion and the key frame proportion; and calculating a quality loss proportion from the loss proportion and the key proportion, which serves as the quality loss identification parameter for identifying the generated frames and the generated content.
In one possible implementation, an index reflecting the overall transmission loss proportion is first calculated from the obtained statistics of lost frames and lost content and the determined overall size of the affected video segments. Specifically, the loss proportion representing the global loss level is obtained from the ratio of the number of lost frames to the total number of affected frames and the ratio of the lost content block area to the total area of the affected video segments. For example, multiplying the lost frame ratio by the lost content ratio yields a composite loss proportion; the larger the value, the more serious the data loss in this transmission and the greater the overall impact on video quality. Next, the influence of the transmission loss on the semantic integrity of the video is evaluated from the obtained overall key content proportion and key frame proportion. Because different video contents contribute differently to perceived quality and semantic expression, the damage caused by losing key frames and key content far exceeds that of ordinary frames and content, so these two semantic importance indices are weighted to highlight the content value of the lost data. For example, preset weight coefficients are assigned to the overall key content proportion and the key frame proportion, which are then summed by weight to give an integrated key proportion. The larger the key proportion, the more semantically critical information the lost data contains, and the more serious the impact on the integrity and quality of the visual content. Finally, the loss proportion and the key proportion are considered together to compute the quality loss identification parameter, an intuitive quantification of the degree of transmission quality degradation. For example, the loss proportion and the key proportion are jointly weighted, with the weight coefficients reflecting the relative importance of the global loss level and the local semantic importance to video quality. The resulting quality loss proportion is expressed as a percentage, visually representing the relative magnitude of the transmission quality impairment.
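A minimal sketch of this computation in Python; the weight values (`w_content`, `w_frame`, `w_loss`, `w_key`) and the multiplicative combination of the two loss ratios follow the examples above, but their concrete settings are assumptions:

```python
def quality_loss_ratio(lost_frames, total_affected_frames,
                       lost_area, total_affected_area,
                       overall_key_content_ratio, key_frame_ratio,
                       w_content=0.5, w_frame=0.5, w_loss=0.6, w_key=0.4):
    """Combine the global loss level and semantic importance into a percentage."""
    # Global loss level: composite of frame-count and content-area loss ratios.
    loss_ratio = (lost_frames / total_affected_frames) * (lost_area / total_affected_area)
    # Semantic importance: weighted sum of the two key proportions.
    key_ratio = w_content * overall_key_content_ratio + w_frame * key_frame_ratio
    # Quality loss identification parameter, expressed as a percentage.
    return 100.0 * (w_loss * loss_ratio + w_key * key_ratio)

# Example: 5 of 200 affected frames lost, 2% of the affected area lost,
# 30% overall key content, 10% key frames  ->  about 8.03%.
print(quality_loss_ratio(5, 200, 0.02, 1.0, 0.30, 0.10))
```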
In this way the semantic importance of the lost content is taken into account alongside the total amount of lost data when deriving the quality loss identification parameter, which intuitively quantifies the negative impact of transmission link deterioration on video quality and provides a reference scale for judging transmission performance and the effect of fault-tolerant recovery. The quality loss identification parameter is also embedded into the metadata of the repaired video, so that it is transmitted and stored together with the video, building a complete quality identification and tracking mechanism that greatly facilitates subsequent quality control and optimization.
In summary, the communication method for E-band transmission of ultra-high definition video provided by the embodiment of the application has the following technical effects:
Environmental characteristic data are monitored and collected in a transmission environment performing ultra-high definition video transmission based on E-band transmission, and the data are predicted and subjected to transmission impact analysis to obtain a transmission impact analysis result, identifying the environmental factors that degrade transmission quality and providing a decision basis for subsequent transmission compensation. When the transmission impact analysis result exceeds a preset transmission impact threshold, the affected time axis of the result is obtained and the ultra-high definition video under transmission is indexed to obtain a plurality of affected picture contents in a plurality of affected frames; indexing the affected time axis and extracting the video frames within it narrows the scope of subsequent analysis and compensation and improves processing efficiency. Key frame analysis and key content analysis are performed on the plurality of affected picture contents to obtain a plurality of key content proportions and a key frame proportion, the overall key content proportion is calculated, and the key content compensation proportion and the key frame compensation proportion are decided from the overall key content proportion and the key frame proportion, determining the compensation emphasis in a targeted manner and optimizing the compensation strategy. When transmission takes place within the affected time axis, the transmission data are monitored to obtain the lost frames and lost content, and compensation generation of the lost frames and lost content is performed according to the key content compensation proportion and the key frame compensation proportion to obtain generated frames and generated content, dynamically repairing the key video content damaged in transmission, minimizing the influence of the transmission environment and safeguarding video transmission quality. Finally, the quality loss identification parameter is decided from the loss proportion of the lost frames and lost content combined with the overall key content proportion and the key frame proportion; the generated frames and generated content are identified with it and added to the transmission data for display, intuitively reflecting how the transmission was affected and providing a quantifiable reference for subsequent optimization of the transmission scheme and quality evaluation. The technical effects of improving video transmission quality and reducing the loss of key frames and key content during E-band transmission of ultra-high definition video are thereby achieved.
In a second embodiment, based on the same inventive concept as the communication method for E-band transmission of ultra-high definition video in the previous embodiment, as shown in fig. 2, an embodiment of the present application provides a communication platform for E-band transmission of ultra-high definition video, the platform comprising:
The transmission impact analysis module 11 is used for monitoring and collecting environmental characteristic data in a transmission environment for performing ultra-high definition video transmission based on E-band transmission, predicting the environmental characteristic data and performing transmission impact analysis to obtain a transmission impact analysis result;
the impact identification module 12 is configured to obtain the affected time axis of the transmission impact analysis result when that result exceeds a preset transmission impact threshold, and to index the ultra-high definition video under transmission to obtain a plurality of affected picture contents in a plurality of affected frames;
the compensation proportion acquisition module 13 is configured to perform key frame analysis and key content analysis on the plurality of affected picture contents to obtain a plurality of key content proportions and a key frame proportion, calculate the overall key content proportion, and decide the key content compensation proportion and the key frame compensation proportion from the overall key content proportion and the key frame proportion;
the compensation generation module 14 is configured to monitor the transmission data when transmission takes place within the affected time axis, obtain the lost frames and lost content, and perform compensation generation of the lost frames and lost content according to the key content compensation proportion and the key frame compensation proportion to obtain generated frames and generated content;
and the quality loss identification module 15 is configured to decide the quality loss identification parameter from the loss proportion of the lost frames and lost content, combined with the overall key content proportion and the key frame proportion, identify the generated frames and generated content accordingly, and add them to the transmission data for display.
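Purely for orientation, the sketch below shows how the five modules might compose into a single platform pipeline; all class and method names are hypothetical, not taken from the embodiment:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class UhdEbandPlatform:
    """Illustrative composition of modules 11-15."""
    impact_analyzer: Any      # module 11
    impact_identifier: Any    # module 12
    ratio_classifier: Any     # module 13
    compensator: Any          # module 14
    quality_marker: Any       # module 15

    def run(self, env_stream, video, threshold):
        result = self.impact_analyzer.analyze(env_stream)
        if result.score <= threshold:
            return video                                   # no compensation needed
        frames = self.impact_identifier.index(video, result.time_axis)
        ratios = self.ratio_classifier.decide(frames)
        repaired = self.compensator.compensate(video, ratios, result.time_axis)
        return self.quality_marker.mark(repaired, ratios)  # add quality-loss labels
```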
Further, the transmission impact analysis module 11 is configured to perform the following steps:
monitoring and collecting environmental characteristic data in a transmission environment for performing ultra-high definition video transmission based on E-band transmission;
collecting a sample environmental characteristic data set based on the historical environmental characteristic data monitoring records in the transmission environment, collecting the environmental characteristic data observed after a preset time length, and marking it as a sample predicted environmental characteristic data set;
training an environmental characteristic data predictor with the sample environmental characteristic data set and the sample predicted environmental characteristic data set, and predicting the environmental characteristic data to obtain predicted environmental characteristic data;
and performing transmission impact analysis according to the predicted environmental characteristic data to obtain a transmission impact analysis result.
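One way to realize such a predictor is a sliding-window supervised regressor; the sketch below assumes scikit-learn, and the window length, horizon, and model choice are all assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def make_pairs(history, horizon, window=12):
    """Pair each window of past environmental features with the features
    observed `horizon` steps later (the 'preset time length')."""
    X, y = [], []
    for t in range(window, len(history) - horizon):
        X.append(history[t - window:t].ravel())  # sample environmental data
        y.append(history[t + horizon])           # sample predicted data
    return np.array(X), np.array(y)

# history: (T, n_features) record of monitored environmental characteristics.
history = np.random.rand(500, 4)                 # stand-in monitoring record
X, y = make_pairs(history, horizon=6)
predictor = RandomForestRegressor(n_estimators=100).fit(X, y)
predicted_env = predictor.predict(history[-12:].ravel()[None, :])
```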
Further, the transmission impact analysis module 11 is further configured to perform the following steps:
labeling a sample transmission impact analysis result set according to the ultra-high definition video transmission quality monitoring records in the transmission environment and the degree to which transmission quality is affected under different environmental characteristic data;
training a transmission impact analyzer with the sample environmental characteristic data set and the sample transmission impact analysis result set;
and performing transmission impact analysis on the predicted environmental characteristic data with the transmission impact analyzer to obtain the transmission impact analysis result.
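Continuing the sketch above, the analyzer can be a second supervised model that maps environmental features to a labeled impact score; the gradient-boosting choice and the threshold value are assumptions:

```python
from sklearn.ensemble import GradientBoostingRegressor

# impact_labels: impact scores labeled from historical transmission quality
# monitoring under the corresponding environmental conditions.
impact_labels = np.random.rand(len(history))       # stand-in label set
analyzer = GradientBoostingRegressor().fit(history, impact_labels)

impact_score = analyzer.predict(predicted_env)[0]  # analyze the predicted features
TRANSMISSION_IMPACT_THRESHOLD = 0.7                # preset threshold (assumed value)
needs_compensation = impact_score > TRANSMISSION_IMPACT_THRESHOLD
```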
Further, the compensation proportion acquisition module 13 is configured to perform the following steps:
screening the affected frames for key frames according to the pre-marking of all video frames in the ultra-high definition video, and obtaining the proportion of key content in each affected frame, thereby obtaining the key frame proportion and the key content proportions;
calculating the average of the key content proportions to obtain the overall key content proportion;
collecting a sample key frame proportion set and a sample overall key content proportion set, labeling the sample key frame proportions to obtain a sample key frame compensation proportion set, and labeling the sample overall key content proportions to obtain a sample key content compensation proportion set;
training a key frame compensation proportion classification branch with the sample key frame proportion set and the sample key frame compensation proportion set, training a key content compensation proportion classification branch with the sample overall key content proportion set and the sample key content compensation proportion set, and combining the two classification branches to obtain a compensation proportion classifier;
and classifying the overall key content proportion and the key frame proportion with the compensation proportion classifier to obtain the key content compensation proportion and the key frame compensation proportion.
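A minimal sketch of the two-branch compensation proportion classifier, assuming scikit-learn and an illustrative discrete set of compensation levels:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

COMP_LEVELS = [0.25, 0.5, 0.75, 1.0]  # assumed discrete compensation proportions

class CompensationRatioClassifier:
    """One classification branch per input: key frame ratio and key content ratio."""
    def __init__(self):
        self.frame_branch = DecisionTreeClassifier(max_depth=3)
        self.content_branch = DecisionTreeClassifier(max_depth=3)

    def fit(self, frame_ratios, frame_comp, content_ratios, content_comp):
        self.frame_branch.fit(np.asarray(frame_ratios).reshape(-1, 1), frame_comp)
        self.content_branch.fit(np.asarray(content_ratios).reshape(-1, 1), content_comp)
        return self

    def decide(self, key_frame_ratio, overall_key_content_ratio):
        frame_comp = self.frame_branch.predict([[key_frame_ratio]])[0]
        content_comp = self.content_branch.predict([[overall_key_content_ratio]])[0]
        return content_comp, frame_comp

# Labeled sample sets: higher key proportions map to higher compensation levels.
ratios = np.linspace(0, 1, 40)
labels = [COMP_LEVELS[min(int(r * 4), 3)] for r in ratios]
clf = CompensationRatioClassifier().fit(ratios, labels, ratios, labels)
content_comp, frame_comp = clf.decide(key_frame_ratio=0.1, overall_key_content_ratio=0.3)
```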
Further, the compensation generation module 14 is configured to perform the following steps:
monitoring the transmission data when the ultra-high definition video is transmitted within the affected time axis, and performing packet loss detection against the standard data to obtain the lost frames and lost content;
training a frame compensator and a content compensator from the transmission loss data of the ultra-high definition video over historical time, the frame compensator and the content compensator comprising M frame compensation branches and N content compensation branches respectively, where M and N are integers greater than 1;
calculating, from the key content compensation proportion and the key frame compensation proportion together with M and N, the number of frame compensation branches and the number of content compensation branches to be used;
inputting the preceding and following frame data of a lost frame into that number of frame compensation branches of the frame compensator for frame compensation generation, obtaining a plurality of branch-generated frames, and fusing them to obtain the generated frame;
and inputting the successfully transmitted content of the frame containing the lost content into that number of content compensation branches of the content compensator for content generation, obtaining a plurality of branch-generated contents, and fusing them to obtain the generated content.
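A small sketch of the branch-count calculation and fusion; rounding the compensation proportion against M and N, and averaging the branch outputs, are assumptions about details the text leaves open:

```python
import torch

def branch_counts(key_frame_comp, key_content_comp, M, N):
    """Map compensation proportions to the number of branches to invoke."""
    m = max(1, round(key_frame_comp * M))    # frame compensation branches to use
    n = max(1, round(key_content_comp * N))  # content compensation branches to use
    return m, n

def fuse(branch_outputs):
    """Fuse several branch-generated frames or content blocks (pixel-wise mean)."""
    return torch.stack(branch_outputs).mean(dim=0)

m, n = branch_counts(0.75, 0.5, M=8, N=6)    # -> 6 frame and 3 content branches
generated_frame = fuse([torch.rand(3, 64, 64) for _ in range(m)])
```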
Further, the compensation generation module 14 is further configured to perform the following steps:
acquiring a sample preceding-and-following frame data set and a sample lost frame data set from the transmission loss data of the ultra-high definition video over historical time;
constructing the M frame compensation branches based on a generative adversarial network, each frame compensation branch comprising a generator network and a discriminator network;
randomly selecting M sets of generation training data from the sample preceding-and-following frame data set and the sample lost frame data set, and training the M frame compensation branches respectively until convergence to obtain M trained frame compensation branches and thereby the frame compensator, wherein intersection data exists among the M sets of generation training data;
acquiring a sample successfully transmitted content set and a sample lost content set from the transmission loss data of the ultra-high definition video over historical time;
and constructing and training the N content compensation branches with the sample successfully transmitted content set and the sample lost content set to obtain the content compensator.
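A minimal sketch of drawing M partially overlapping training subsets; the shared-core construction and the overlap fraction are assumptions, since the text only requires that the subsets intersect:

```python
import random

def overlapping_subsets(indices, m_branches, subset_size, shared_fraction=0.3):
    """Give every branch a common shared core (knowledge sharing) plus its
    own random remainder (diversity among branches)."""
    shared = random.sample(indices, int(subset_size * shared_fraction))
    shared_set = set(shared)
    rest = [i for i in indices if i not in shared_set]
    subsets = []
    for _ in range(m_branches):
        own = random.sample(rest, subset_size - len(shared))
        subsets.append(shared + own)
    return subsets

pairs = list(range(10_000))  # indices of (context frames, lost frame) sample pairs
subsets = overlapping_subsets(pairs, m_branches=4, subset_size=2_000)
```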
Further, the quality loss identification module 15 is configured to perform the following steps:
calculating the loss proportion from the lost frames, the lost content and the plurality of affected picture contents in the plurality of affected frames;
obtaining the key proportion by weighted calculation from the overall key content proportion and the key frame proportion;
and calculating the quality loss proportion from the loss proportion and the key proportion, which serves as the quality loss identification parameter for marking the generated frames and the generated content.
Any of the steps of the methods described above may be stored as computer instructions or programs in a non-transitory computer-readable memory and called by a computer processor to implement any of the methods of the embodiments of the present application, without unnecessary limitation.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the scope of the application. Thus, it is intended that the present application cover the modifications and variations of this application provided they come within the scope of the application and its equivalents.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202411606334.3A | 2024-11-12 | 2024-11-12 | Communication method and platform for ultra-high definition video E-band transmission
Publications (2)

Publication Number | Publication Date
---|---
CN119135959A | 2024-12-13
CN119135959B | 2025-01-21
Family
ID=93762607
Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202411606334.3A (active) | Communication method and platform for ultra-high definition video E-band transmission | 2024-11-12 | 2024-11-12

Country Status (1)

Country | Link
---|---
CN | CN119135959B
Citations (2)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN108174239A * | 2017-12-04 | 2018-06-15 | 中国联合网络通信集团有限公司 | A video transmission method and device
CN114900662A * | 2022-05-11 | 2022-08-12 | 重庆紫光华山智安科技有限公司 | Method, system, device and medium for determining video stream transmission quality information

Family Cites Families (1)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN101918935B * | 2007-12-05 | 2014-05-14 | 欧乐2号公司 | Video compression system and method for reducing effects of packet loss over communication channel
Also Published As
Publication number | Publication date |
---|---|
CN119135959A (en) | 2024-12-13 |
Legal Events
Date | Code | Title | Description |
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |