CN111861862B - Bitmap data processing method and device of image processing network and computer equipment - Google Patents
- Publication number
- CN111861862B (application CN202010595853.XA)
- Authority
- CN
- China
- Prior art keywords
- bitmap data
- image processing
- size
- network
- processing network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/60—Memory management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The application relates to a bitmap data processing method and apparatus of an image processing network, and to computer equipment. The bitmap data processing method of the image processing network comprises the following steps: acquiring first bitmap data; extracting second bitmap data of 4⌈M/4⌉ × 4⌈M/4⌉ pixel size from the first bitmap data, and writing the second bitmap data into a four-byte-aligned memory space of 4⌈M/4⌉ × 4⌈M/4⌉ size, wherein ⌈·⌉ represents rounding up; and intercepting third bitmap data of M×M pixel size from the memory space of that size, taking the third bitmap data as the bitmap data input into the image processing network, wherein the pixel size of the bitmap data required by the image processing network is M×M. The application solves the problem of low operation efficiency of the image processing network in the related art and improves the operation efficiency of the image processing network.
Description
Technical Field
The present application relates to the field of image processing, and in particular, to a bitmap data processing method, apparatus, computer device, and computer readable storage medium for an image processing network.
Background
The related art image processing network is exemplified here by SiamRPN. The SiamRPN network combines a Siamese (twin) network with a region proposal network. Through the Siamese network, the algorithm adapts to the tracking target in the bitmap data, so that the information of the tracked target can be used to complete the initialization of the detector; through the region proposal network, the algorithm accurately predicts the position of the tracking target in the bitmap data. Through the combination of the two, the SiamRPN network can be trained end to end.
In the image processing process of the SiamRPN network, the SiamRPN network inputs the two acquired pieces of bitmap data, of 127×127 and 255×255 pixel size respectively, into the Siamese network for processing; the results are then processed by the convolutional neural network in the region proposal network, thereby realizing tracking of the target.
During research it was found that the SiamRPN network has low operation efficiency and places high demands on hardware resources.
Aiming at the problems in the related art of low operation efficiency of the image processing network and high demand on hardware resources, no effective solution has yet been proposed.
Disclosure of Invention
The embodiment of the application provides a bitmap data processing method, device, system, computer equipment and computer readable storage medium of an image processing network, which are used for at least solving the problem of low operation efficiency of the image processing network in the related technology.
In a first aspect, an embodiment of the present application provides a bitmap data processing method of an image processing network, including:
acquiring first bitmap data;
extracting second bitmap data of 4⌈M/4⌉ × 4⌈M/4⌉ pixel size from the first bitmap data, and writing the second bitmap data into a four-byte-aligned memory space of 4⌈M/4⌉ × 4⌈M/4⌉ size, wherein ⌈·⌉ represents rounding up;
intercepting third bitmap data of M×M pixel size from the memory space of 4⌈M/4⌉ × 4⌈M/4⌉ size, and taking the third bitmap data as the bitmap data input into the image processing network, wherein the pixel size of the bitmap data required by the image processing network is M×M.
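The alignment rule in the claim above rounds the required crop size M up to the next multiple of four. A minimal Python sketch of that rounding, given purely as an illustration (the function name is the author's own, not from the patent):

```python
import math

def aligned_size(m: int) -> int:
    """Round the required pixel size m up to a multiple of 4, i.e. 4 * ceil(m / 4)."""
    return 4 * math.ceil(m / 4)

# The two sizes used by the SiamRPN embodiments:
# 127 -> 128 (template network), 255 -> 256 (detection network)
```

Under this rule, an already-aligned size such as 128 is left unchanged.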
In some embodiments, the first bitmap data is bitmap data in YUV format; the extracting of the second bitmap data of 4⌈M/4⌉ × 4⌈M/4⌉ pixel size from the first bitmap data includes:
converting the first bitmap data into bitmap data in an RGB format;
extracting second bitmap data of 4⌈M/4⌉ × 4⌈M/4⌉ pixel size from the bitmap data in the RGB format.
In some embodiments, the third bitmap data is bitmap data in YUV format; the intercepting of the third bitmap data of M×M pixel size from the memory space of 4⌈M/4⌉ × 4⌈M/4⌉ size, and the taking of the third bitmap data as the bitmap data input into the image processing network, include:
intercepting third bitmap data of M×M pixel size from the memory space of 4⌈M/4⌉ × 4⌈M/4⌉ size;
converting the third bitmap data into bitmap data in an RGB format, and taking the bitmap data in the RGB format as the bitmap data input into the image processing network.
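The patent does not spell out the YUV-to-RGB conversion it uses; a common per-pixel conversion with standard BT.601 full-range coefficients is sketched below as an assumption, not as the patented method:

```python
def yuv_to_rgb(y: float, u: float, v: float) -> tuple:
    """Convert one YUV pixel (components in 0..255) to RGB, BT.601 full-range coefficients."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    # Clamp each channel to the displayable 0..255 range.
    clamp = lambda x: max(0, min(255, round(x)))
    return clamp(r), clamp(g), clamp(b)
```

With U = V = 128 the chroma terms vanish, so gray-scale values pass through unchanged, matching the description of Y as the gray-scale component.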
In some of these embodiments, the image processing network comprises a SiamRPN network.
In some of these embodiments, the SiamRPN network includes a template network; the extracting of second bitmap data of 4⌈M/4⌉ × 4⌈M/4⌉ pixel size from the first bitmap data and the writing of the second bitmap data into a four-byte-aligned memory space of 4⌈M/4⌉ × 4⌈M/4⌉ size include:
extracting bitmap data of 128×128 pixel size from the first bitmap data, and writing the bitmap data of 128×128 pixel size into a four-byte-aligned memory space of 128×128 size;
the intercepting of third bitmap data of M×M pixel size from the memory space of 4⌈M/4⌉ × 4⌈M/4⌉ size, and the taking of the third bitmap data as the bitmap data input into the image processing network, include:
intercepting bitmap data of 127×127 pixel size from the memory space of 128×128 size, and taking the bitmap data of 127×127 pixel size as the bitmap data input into the template network.
In some of these embodiments, the SiamRPN network includes a detection network; the extracting of second bitmap data of 4⌈M/4⌉ × 4⌈M/4⌉ pixel size from the first bitmap data and the writing of the second bitmap data into a four-byte-aligned memory space of 4⌈M/4⌉ × 4⌈M/4⌉ size include:
extracting bitmap data of 256×256 pixel size from the first bitmap data, and writing the bitmap data of 256×256 pixel size into a four-byte-aligned memory space of 256×256 size;
the intercepting of third bitmap data of M×M pixel size from the memory space of 4⌈M/4⌉ × 4⌈M/4⌉ size, and the taking of the third bitmap data as the bitmap data input into the image processing network, include:
intercepting bitmap data of 255×255 pixel size from the memory space of 256×256 size, and taking the bitmap data of 255×255 pixel size as the bitmap data input into the detection network.
In some of these embodiments, the image processing network comprises a SiamRPN network; the SiamRPN network includes a plurality of convolutional layers, wherein at least one of the plurality of convolutional layers is replaced with or connected to an ARM Neon-optimized convolutional layer.
In a second aspect, an embodiment of the present application provides a bitmap data processing apparatus of an image processing network, including:
the acquisition module is used for acquiring the first bitmap data;
an extraction module, configured to extract second bitmap data of 4⌈M/4⌉ × 4⌈M/4⌉ pixel size from the first bitmap data and write the second bitmap data into a four-byte-aligned memory space of 4⌈M/4⌉ × 4⌈M/4⌉ size, wherein ⌈·⌉ represents rounding up;
an intercepting module, configured to intercept third bitmap data of M×M pixel size from the memory space of 4⌈M/4⌉ × 4⌈M/4⌉ size, and to take the third bitmap data as the bitmap data input into the image processing network, wherein the pixel size of the bitmap data required by the image processing network is M×M.
In a third aspect, an embodiment of the present application provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the bitmap data processing method of the image processing network according to the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a bitmap data processing method of an image processing network as described in the first aspect above.
Compared with the related art, the bitmap data processing method and apparatus, the computer device, and the computer readable storage medium of the image processing network provided by the embodiments of the application acquire first bitmap data; extract second bitmap data of 4⌈M/4⌉ × 4⌈M/4⌉ pixel size from the first bitmap data and write it into a four-byte-aligned memory space of 4⌈M/4⌉ × 4⌈M/4⌉ size, wherein ⌈·⌉ represents rounding up; and intercept third bitmap data of M×M pixel size from that memory space, taking the third bitmap data as the bitmap data input into the image processing network, wherein the pixel size of the bitmap data required by the image processing network is M×M. This solves the problem of low operation efficiency of the image processing network in the related art and improves the operation efficiency of the image processing network.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below; other features, objects, and advantages of the application will become apparent from the description and the drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
Fig. 1 is a flowchart of a bitmap data processing method of an image processing network according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an ARM Neon-based optimized SiamRPN network according to an embodiment of the present application;
FIG. 3 is a flow chart of a related art SiamRPN network-based bitmap data processing method;
FIG. 4 is a flowchart of a bitmap data processing method of SiamRPN network in accordance with a preferred embodiment of the present application;
fig. 5 is a block diagram of a configuration of a bitmap data processing apparatus of an image processing network according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a hardware configuration of a bitmap data processing apparatus of an image processing network according to an embodiment of the present application.
Detailed Description
The present application will be described and illustrated with reference to the accompanying drawings and examples in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. All other embodiments, which can be made by a person of ordinary skill in the art based on the embodiments provided by the present application without making any inventive effort, are intended to fall within the scope of the present application.
It is apparent that the drawings in the following description are only some examples or embodiments of the present application, and those of ordinary skill in the art can apply the present application to other similar situations according to these drawings without inventive effort. Moreover, it should be appreciated that while such a development effort might be complex and time-consuming, it would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly and implicitly understood by those of ordinary skill in the art that the described embodiments of the application can be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs. The terms "a," "an," "the," and similar referents in the context of the application are not to be construed as limiting the quantity, but rather as singular or plural. The terms "comprising," "including," "having," and any variations thereof, are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to only those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in connection with the present application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means two or more. "and/or" describes an association relationship of an association object, meaning that there may be three relationships, e.g., "a and/or B" may mean: a exists alone, A and B exist together, and B exists alone. The character "/" generally indicates that the context-dependent object is an "or" relationship. The terms "first," "second," "third," and the like, as used herein, are merely distinguishing between similar objects and not representing a particular ordering of objects.
The method embodiments provided by the present application may be executed in a computer device. The bitmap data processing method of the image processing network according to the embodiments of the present application will be described and explained below taking a computer device as an example.
The embodiment provides a bitmap data processing method of an image processing network. Fig. 1 is a flowchart of a bitmap data processing method of an image processing network according to an embodiment of the present application, as shown in fig. 1, the flowchart including the steps of:
In step S101, the computer device acquires first bitmap data.
In this step, the first bitmap data may be bitmap data acquired by the computer device in real time, or may be bitmap data acquired from a bitmap database of the computer device.
In step S102, the computer device extracts second bitmap data of 4⌈M/4⌉ × 4⌈M/4⌉ pixel size from the first bitmap data, and writes the second bitmap data into a four-byte-aligned memory space of 4⌈M/4⌉ × 4⌈M/4⌉ size, wherein ⌈·⌉ represents rounding up.
In this step, the computer device extracts second bitmap data of 4⌈M/4⌉ × 4⌈M/4⌉ pixel size directly from the first bitmap data and writes it into a four-byte-aligned memory space of 4⌈M/4⌉ × 4⌈M/4⌉ size. This avoids a separate memory-alignment operation on the bitmap data when it is input into the image processing network, reduces the computation associated with memory alignment, optimizes the memory usage of the computer device, and improves the performance of the computer device.
Moreover, in the present embodiment, writing the second bitmap data into the four-byte-aligned memory space of 4⌈M/4⌉ × 4⌈M/4⌉ size makes it convenient for step S103 to directly intercept, from that memory space, the bitmap data required to be input into the image processing network.
In step S103, the computer device intercepts third bitmap data of M×M pixel size from the memory space of 4⌈M/4⌉ × 4⌈M/4⌉ size, and takes the third bitmap data as the bitmap data input into the image processing network, wherein the pixel size of the bitmap data required by the image processing network is M×M.
In this step, the image processing network may be any network requiring memory alignment in the related art; for example, the image processing network includes, but is not limited to, one of the following: the SiamRPN network, the SiamRPN++ network, and the DaSiamRPN network.
Through steps S101 to S103, the computer device can, according to the pixel size of the bitmap data required by the image processing network, directly intercept from the memory space bitmap data of exactly that pixel size. The extra rows and columns added for width and height alignment therefore do not participate in the computation of the image processing network, avoiding the situation in the related art where, after memory alignment, the padding added to the bitmap data must also participate in the computation of the image processing network.
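Steps S101 to S103 can be sketched with NumPy: the second bitmap data is written into a buffer whose side length is a multiple of four, and the M×M third bitmap data is a view into that buffer, so the alignment padding never enters the network's computation. This is an illustrative sketch under that reading of the patent; the function and variable names are the author's own:

```python
import math
import numpy as np

def preprocess(first_bitmap: np.ndarray, m: int) -> np.ndarray:
    """Extract an aligned n x n region (n = 4 * ceil(m / 4)) and return an m x m view of it."""
    n = 4 * math.ceil(m / 4)
    # Step S102: second bitmap data, written into a four-byte-row-aligned n x n buffer.
    aligned = np.zeros((n, n), dtype=np.uint8)
    aligned[:, :] = first_bitmap[:n, :n]
    # Step S103: third bitmap data -- an m x m view; the alignment padding stays in
    # the buffer but does not participate in the network's computation.
    return aligned[:m, :m]

frame = np.arange(300 * 300, dtype=np.uint8).reshape(300, 300)  # stand-in first bitmap data
third = preprocess(frame, 127)
```

Because `third` is a slice rather than a copy, its backing buffer is still the aligned 128×128 allocation, which is exactly the property the embodiment relies on.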
For image displays, the images are typically displayed in RGB format, while YUV format is typically used when transmitting image data, since using YUV format for image data transmission saves bandwidth.
The three letters R, G, and B represent Red, Green, and Blue, respectively; these are called the three primary colors, and by adding them in different ratios, various colors can be produced. YUV is a color coding method commonly used in video processing components, where "Y" represents luminance (Luma), that is, the gray-scale value, and "U" and "V" represent chrominance (Chroma), which describe the color and saturation of a pixel.
Therefore, in some embodiments, in order to save bandwidth, the computer device generally collects bitmap data in YUV format when acquiring the bitmap data, or converts the collected bitmap data in other formats into YUV data, and converts the bitmap in YUV format into RGB format data when processing and displaying the image data.
In the embodiment of the application, in order to facilitate the subsequent bitmap data processing of the image processing network, the bitmap data in the YUV format needs to be converted into the corresponding RGB format.
For example, in some of these embodiments, the first bitmap data is bitmap data in YUV format; the computer device's extracting of the second bitmap data of 4⌈M/4⌉ × 4⌈M/4⌉ pixel size from the first bitmap data includes: converting the first bitmap data into bitmap data in RGB format; and extracting second bitmap data of 4⌈M/4⌉ × 4⌈M/4⌉ pixel size from the bitmap data in RGB format.
In this embodiment, the computer device ensures that the bitmap data input to the image processing network is in RGB format by converting the first bitmap data into the bitmap data in RGB format, so that the subsequent operation of the image processing network is facilitated.
For another example, in some of these embodiments, the third bitmap data is bitmap data in YUV format; the computer device's intercepting of third bitmap data of M×M pixel size from the memory space of 4⌈M/4⌉ × 4⌈M/4⌉ size, and taking of the third bitmap data as the bitmap data input into the image processing network, include: intercepting third bitmap data of M×M pixel size from the memory space of 4⌈M/4⌉ × 4⌈M/4⌉ size; and converting the third bitmap data into bitmap data in RGB format, taking the bitmap data in RGB format as the bitmap data input into the image processing network.
In this embodiment, the computer device converts the third bitmap data into RGB format before it is input into the image processing network, thereby ensuring that the bitmap data input into the image processing network is in RGB format and facilitating the subsequent operation of the image processing network.
The description and illustration is made below with reference to the accompanying drawings and taking an example in which the image processing network includes a SiamRPN network.
In some of these embodiments, the SiamRPN network includes a template network. The computer device's extracting of second bitmap data of 4⌈M/4⌉ × 4⌈M/4⌉ pixel size from the first bitmap data, and writing of the second bitmap data into a four-byte-aligned memory space of 4⌈M/4⌉ × 4⌈M/4⌉ size, include: extracting bitmap data of 128×128 pixel size from the first bitmap data, and writing the bitmap data of 128×128 pixel size into a four-byte-aligned memory space of 128×128 size. The intercepting of third bitmap data of M×M pixel size from that memory space, and the taking of the third bitmap data as the bitmap data input into the image processing network, include: intercepting bitmap data of 127×127 pixel size from the memory space of 128×128 size, and taking the bitmap data of 127×127 pixel size as the bitmap data input into the template network. In this embodiment, by the above manner, the computer device avoids performing a memory-alignment operation before inputting the bitmap data into the template network of the SiamRPN network; at the same time, the memory usage of the SiamRPN network in the computer device is optimized, improving the performance of the computer device.
In some of these embodiments, the SiamRPN network includes a detection network. The computer device's extracting of second bitmap data of 4⌈M/4⌉ × 4⌈M/4⌉ pixel size from the first bitmap data, and writing of the second bitmap data into a four-byte-aligned memory space of 4⌈M/4⌉ × 4⌈M/4⌉ size, include: extracting bitmap data of 256×256 pixel size from the first bitmap data, and writing the bitmap data of 256×256 pixel size into a four-byte-aligned memory space of 256×256 size. The intercepting of third bitmap data of M×M pixel size from that memory space, and the taking of the third bitmap data as the bitmap data input into the image processing network, include: intercepting bitmap data of 255×255 pixel size from the memory space of 256×256 size, and taking the bitmap data of 255×255 pixel size as the bitmap data input into the detection network.
In this embodiment, by the above manner, the computer device is prevented from performing the operation of memory alignment before inputting the bitmap data to the detection network in the SiamRPN network, and meanwhile, the memory usage of the SiamRPN network in the computer device is optimized, so that the performance of the computer device is improved.
Based on the above embodiments, after the bitmap data in RGB format is input into the detection network and/or the template network of the SiamRPN network, corresponding convolution operations still need to be performed on the features output by the detection network and the template network. When the convolutional neural network in the related-art SiamRPN network performs these operations, the amount of subsequent computation is very large; the load on the inference engines of some low-end smart chips (such as HiSilicon chips) is quite heavy, and the operation efficiency is low.
Thus, to further increase the operational efficiency of the SiamRPN network, in some embodiments, the SiamRPN network includes a plurality of convolution layers, wherein at least one of the plurality of convolution layers is replaced with or connected to an ARM Neon optimized convolution layer.
In this embodiment, by replacing at least one of the plurality of convolution layers in the SiamRPN network with an ARM Neon-optimized convolution layer, or connecting it to such a layer, the operation efficiency of the processor is improved and is far higher (about 10 times) than that of the inference engine of a low-end smart chip (such as a HiSilicon chip).
The ARM Neon-optimized SiamRPN network is described and illustrated below with reference to the drawings and embodiments.
As shown in fig. 2, in the conventional algorithm the output of the Z network (template network) is required to serve as the convolution kernel of the depthwise convolution layer. The template network needs to be updated as the scene changes, which means that the weights of the depthwise convolution layer are not fixed; this is currently difficult to achieve for computer equipment that loads only one model to complete the inference calculation.
With reference to fig. 2, in this embodiment the ARM Neon optimization replaces the conventional depthwise convolution calculation, so that custom data-block calculation can be realized; the characteristics of Neon multiply instructions are fully utilized, the dot-product operation is completed more effectively, and the calculation efficiency is greatly improved. The conventional depthwise convolution calculation, by contrast, is an ordinary matrix dot-product calculation. For example, the convolution kernel output by the Z network (template network) has size 256×4×4×sizeof(float), and the data output by the X network (detection network) has size 256×20×20×sizeof(float), with a step size of 1. The traditional algorithm then performs on the order of 256×4×4×17×17 multiply–add operations (one 4×4 dot product per channel at each of the 17×17 output positions), which is very time-consuming and costly in performance.
It should be noted that in this embodiment the Neon multiply instruction has a latency of about 2 clock cycles before its result is available; this time can be used to complete the accompanying addition at no extra cost, which already improves performance compared with the conventional calculation. On the other hand, Neon's custom data-block calculation mainly uses the 128-bit Neon registers on the ARM core to process blocks of data. For example, for the Z network: first, a row of 4 float values can be loaded into one Neon register in a single operation, whereas the conventional calculation uses 32-bit general-purpose registers and needs four loads; second, there are multiple Neon registers on the ARM core (for example, but not limited to, 16 on ARMv7 or 32 on ARMv8), so several rows of data can be loaded at once in a custom manner. For example, loading 4 rows of 4 columns at a time, the 256×4×4 float values need only 256 loop iterations using 4 Neon registers, whereas the conventional algorithm using general-purpose registers actually performs 256×4×4 loop loads. It is therefore apparent that the convolution layer calculation optimized with ARM Neon is much faster than that of the conventional algorithm.
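The depthwise cross-correlation that the Neon code accelerates can be stated in a few lines of NumPy, with the shapes from the example above (a 256-channel 4×4 kernel slid over a 256-channel 20×20 feature map with step size 1 gives a 17×17 response per channel). This is illustrative only — the patent's optimized implementation operates on Neon registers, not NumPy arrays:

```python
import numpy as np

def depthwise_xcorr(kernel: np.ndarray, feature: np.ndarray) -> np.ndarray:
    """Per-channel cross-correlation: kernel (C,k,k) slid over feature (C,H,W), stride 1."""
    c, k, _ = kernel.shape
    _, h, w = feature.shape
    out = np.empty((c, h - k + 1, w - k + 1), dtype=feature.dtype)
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            # One k x k dot product per channel at each output position -- the
            # operation the Neon registers batch four floats at a time.
            patch = feature[:, i:i + k, j:j + k]
            out[:, i, j] = (kernel * patch).sum(axis=(1, 2))
    return out

rng = np.random.default_rng(0)
z = rng.standard_normal((256, 4, 4)).astype(np.float32)    # template-network output (kernel)
x = rng.standard_normal((256, 20, 20)).astype(np.float32)  # detection-network output
response = depthwise_xcorr(z, x)
```

Each output element is an independent 4×4 dot product, which is why packing four floats per Neon register and keeping several registers loaded maps so directly onto this computation.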
It should be noted that ARM NEON is a 128-bit single instruction, multiple data (SIMD) extension architecture for ARM Cortex-A and Cortex-R52 processors.
The embodiments of the present application are described and illustrated below in terms of preferred embodiments.
In this embodiment, the image processing network is taken to include a SiamRPN network, where the SiamRPN network includes a detection network and a template network. Fig. 3 is a flow chart of a bitmap data processing method based on the SiamRPN network in the related art. As shown in fig. 3, in the related art the SiamRPN network extracts bitmap data from an input frame, and the size of the extracted bitmap data may not conform to the sizes required by the detection network and the template network; therefore, the bitmap data generally needs to be memory-aligned before being input into the corresponding detection network and template network for use. This memory-alignment process reduces the subsequent operation efficiency of the SiamRPN network.
In the present application, as shown in fig. 4, the computer device encapsulates a preprocessing node as a node operator and adds it to the front end of the SiamRPN network. The preprocessing node is used to acquire first bitmap data; extract second bitmap data of 4⌈M/4⌉ × 4⌈M/4⌉ pixel size from the first bitmap data and write it into a four-byte-aligned memory space of 4⌈M/4⌉ × 4⌈M/4⌉ size, wherein ⌈·⌉ represents rounding up; and intercept third bitmap data of M×M pixel size from that memory space, taking the third bitmap data as the bitmap data input into the image processing network, wherein the pixel size of the bitmap data required by the image processing network is M×M. By setting the preprocessing node to directly output bitmap data of the pixel size M×M required by the detection network and/or the template network, the memory alignment of the related art can be avoided, and the operation efficiency of the SiamRPN network is improved.
It should be noted that, when bitmap data is input into the template network, the pixel size of the bitmap data output by the preprocessing node in the embodiment of the present application may be 127×127; and when bitmap data is input into the detection network, the pixel size of the bitmap data output by the preprocessing node may be 255×255.
The embodiment also provides a bitmap data processing device of an image processing network, which is used to implement the foregoing embodiments and preferred embodiments; details already described are not repeated. As used below, the terms "module," "unit," "sub-unit," and the like may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 5 is a block diagram of a bitmap data processing apparatus of an image processing network according to an embodiment of the present application. As shown in fig. 5, the apparatus includes:
an acquisition module 51, configured to acquire first bitmap data;
an extraction module 52, coupled to the acquisition module 51, configured to extract, from the first bitmap data, second bitmap data with a pixel size of (⌈M/4⌉×4)×(⌈M/4⌉×4), and write the second bitmap data into a four-byte-aligned memory space of (⌈M/4⌉×4)×(⌈M/4⌉×4) size, where ⌈·⌉ represents rounding up;
an interception module 53, coupled to the extraction module 52, configured to intercept, from the memory space of that size, third bitmap data with a pixel size of M×M, and take the third bitmap data as the bitmap data input into the image processing network, where the pixel size of the bitmap data required to be input into the image processing network is M×M.
In some of these embodiments, the extraction module 52 includes:
a first conversion sub-module, configured to convert the first bitmap data into bitmap data in RGB format, where the first bitmap data is bitmap data in YUV format;
a first extraction sub-module, configured to extract, from the bitmap data in RGB format, second bitmap data with a pixel size of (⌈M/4⌉×4)×(⌈M/4⌉×4).
In some of these embodiments, the intercept module 53 includes:
a first interception sub-module, configured to intercept, from the memory space of (⌈M/4⌉×4)×(⌈M/4⌉×4) size, third bitmap data with a pixel size of M×M;
a second conversion sub-module, configured to convert the third bitmap data into bitmap data in RGB format and take the bitmap data in RGB format as the bitmap data input into the image processing network.
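The conversion sub-modules above translate between YUV and RGB pixel formats. A per-pixel sketch is given below; the BT.601 full-range coefficients are an assumption, since the patent does not specify which YUV-to-RGB matrix is used, and the function name `yuv_to_rgb` is hypothetical.

```python
def yuv_to_rgb(y: float, u: float, v: float) -> tuple:
    """Convert one YUV pixel (0-255 per channel, chroma centered at
    128) to RGB using BT.601 full-range coefficients — an assumed
    choice; the patent only states that a YUV-to-RGB conversion
    occurs, not which matrix it uses."""
    r = y + 1.402 * (v - 128.0)
    g = y - 0.344136 * (u - 128.0) - 0.714136 * (v - 128.0)
    b = y + 1.772 * (u - 128.0)
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)
```

A neutral gray pixel (y = u = v = 128) maps to (128, 128, 128), which is a quick sanity check for any such conversion matrix.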
In some of these embodiments, the image processing network comprises a SiamRPN network.
In some of these embodiments, the extraction module 52 further includes:
a second extraction sub-module, configured to extract bitmap data of 128×128 pixel size from the first bitmap data and write the bitmap data of 128×128 pixel size into a four-byte-aligned memory space of 128×128 byte size;
a second interception sub-module, configured to intercept bitmap data of 127×127 pixel size from the memory space of 128×128 byte size and take the bitmap data of 127×127 pixel size as the bitmap data input into the template network.
In some of these embodiments, the extraction module 52 further includes:
a third extraction sub-module, configured to extract bitmap data of 256×256 pixel size from the first bitmap data and write the bitmap data of 256×256 pixel size into a four-byte-aligned memory space of 256×256 byte size;
a third interception sub-module, configured to intercept bitmap data of 255×255 pixel size from the memory space of 256×256 byte size and take the bitmap data of 255×255 pixel size as the bitmap data input into the detection network.
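The extract-align-intercept pipeline carried out by these sub-modules can be sketched end to end. This is a simplified single-channel illustration under stated assumptions — the function name `preprocess` is hypothetical, and real bitmap data would carry multiple channels and come from a video frame rather than a synthetic array.

```python
import numpy as np

def preprocess(frame: np.ndarray, m: int) -> np.ndarray:
    """Sketch of the preprocessing node: extract an n x n region with
    n = ceil(m / 4) * 4, so every row occupies a whole number of
    four-byte groups, then intercept the m x m block that is actually
    fed to the network."""
    n = -(-m // 4) * 4                          # ceil(m / 4) * 4
    aligned = np.zeros((n, n), dtype=np.uint8)  # four-byte-aligned rows
    h, w = min(n, frame.shape[0]), min(n, frame.shape[1])
    aligned[:h, :w] = frame[:h, :w]             # "second bitmap data"
    return aligned[:m, :m]                      # "third bitmap data", m x m

# Synthetic 300x300 single-channel frame for illustration
frame = (np.arange(300 * 300) % 256).astype(np.uint8).reshape(300, 300)
template_in = preprocess(frame, 127)  # 127x127 for the template branch
search_in = preprocess(frame, 255)    # 255x255 for the detection branch
```

Because the crop is taken from an already-aligned buffer, no second memory-alignment pass is needed before inference, which is the efficiency gain the patent claims.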
In some of these embodiments, the image processing network comprises a SiamRPN network; the SiamRPN network includes a plurality of convolutional layers, wherein at least one of the plurality of convolutional layers is replaced with or connected to a convolutional layer optimized by ARM Neon.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. For modules implemented in hardware, the above modules may be located in the same processor, or may be distributed across different processors in any combination.
In addition, the bitmap data processing method of the image processing network according to the embodiment of the present application described in connection with fig. 1 may be implemented by a computer device. Fig. 6 is a schematic diagram of a hardware configuration of a bitmap data processing apparatus of an image processing network according to an embodiment of the present application.
The computer device may include a processor 61 and a memory 62 storing computer program instructions.
In particular, the processor 61 may include a Central Processing Unit (CPU) or an Application-Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits that implement embodiments of the present application.
Memory 62 may include, among other things, mass storage for data or instructions. By way of example, and not limitation, memory 62 may comprise a Hard Disk Drive (HDD), a floppy disk drive, a Solid State Drive (SSD), flash memory, an optical disk, a magneto-optical disk, a magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. The memory 62 may include removable or non-removable (or fixed) media, where appropriate. The memory 62 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 62 is a non-volatile memory. In particular embodiments, memory 62 includes Read-Only Memory (ROM) and Random Access Memory (RAM). Where appropriate, the ROM may be a mask-programmed ROM, a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), an Electrically Alterable ROM (EAROM), or a flash memory, or a combination of two or more of these. Where appropriate, the RAM may be a Static Random-Access Memory (SRAM) or a Dynamic Random-Access Memory (DRAM), where the DRAM may be a Fast Page Mode DRAM (FPMDRAM), an Extended Data Out DRAM (EDODRAM), a Synchronous DRAM (SDRAM), or the like.
Memory 62 may be used to store or cache various data files that need to be processed and/or communicated, as well as possible computer program instructions for execution by processor 61.
The processor 61 implements the bitmap data processing method of any one of the image processing networks of the above-described embodiments by reading and executing the computer program instructions stored in the memory 62.
In some of these embodiments, the computer device may also include a communication interface 63 and a bus 60. As shown in fig. 6, the processor 61, the memory 62, and the communication interface 63 are connected to each other through the bus 60 and perform communication with each other.
The communication interface 63 is used to implement communication between the modules, apparatuses, units, and/or devices in embodiments of the application. The communication interface 63 may also implement data communication with other components, such as external devices, image/data acquisition devices, databases, external storage, and image/data processing workstations.
Bus 60 includes hardware, software, or both, coupling components of the computer device to one another. Bus 60 includes, but is not limited to, at least one of: a data bus, an address bus, a control bus, an expansion bus, and a local bus. By way of example, and not limitation, bus 60 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Extended Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express or PCI-X bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), or other suitable bus, or a combination of two or more of these. Bus 60 may include one or more buses, where appropriate. Although embodiments of the application have been described and illustrated with respect to a particular bus, the application contemplates any suitable bus or interconnect.
The computer device may execute the bitmap data processing method of the image processing network in the embodiment of the present application based on the acquired first bitmap data, thereby implementing the bitmap data processing method of the image processing network described in connection with fig. 1.
In addition, in connection with the bitmap data processing method of the image processing network in the above embodiment, an embodiment of the present application may be implemented by providing a computer readable storage medium. The computer readable storage medium has stored thereon computer program instructions; the computer program instructions, when executed by a processor, implement a bitmap data processing method of any of the image processing networks of the above embodiments.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of the technical features, the combination should be considered within the scope of this description.
The above examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.
Claims (10)
1. A bitmap data processing method of an image processing network, comprising:
acquiring first bitmap data;
extracting, from the first bitmap data, second bitmap data with a pixel size of (⌈M/4⌉×4)×(⌈M/4⌉×4), and writing the second bitmap data into a four-byte-aligned memory space of (⌈M/4⌉×4)×(⌈M/4⌉×4) size, wherein ⌈·⌉ represents rounding up;
intercepting, from the memory space of the (⌈M/4⌉×4)×(⌈M/4⌉×4) size, third bitmap data with a pixel size of M×M, and taking the third bitmap data as bitmap data input into an image processing network, wherein the pixel size of the bitmap data required to be input into the image processing network is M×M.
2. The bitmap data processing method of an image processing network according to claim 1, wherein the first bitmap data is bitmap data in YUV format; and the extracting, from the first bitmap data, second bitmap data with a pixel size of (⌈M/4⌉×4)×(⌈M/4⌉×4) comprises:
Converting the first bitmap data into bitmap data in an RGB format;
extracting, from the bitmap data in the RGB format, second bitmap data with a pixel size of (⌈M/4⌉×4)×(⌈M/4⌉×4).
3. The bitmap data processing method of an image processing network according to claim 1, wherein the third bitmap data is bitmap data in YUV format; and the taking the third bitmap data as bitmap data input into the image processing network comprises:
converting the third bitmap data into bitmap data in RGB format, and taking the bitmap data in the RGB format as the bitmap data input into the image processing network.
4. The bitmap data processing method of an image processing network according to claim 1, wherein the image processing network comprises a SiamRPN network.
5. The bitmap data processing method of an image processing network according to claim 4, wherein the SiamRPN network comprises a template network; and the extracting, from the first bitmap data, second bitmap data with a pixel size of (⌈M/4⌉×4)×(⌈M/4⌉×4) and writing the second bitmap data into a four-byte-aligned memory space of (⌈M/4⌉×4)×(⌈M/4⌉×4) size comprises:
Extracting bitmap data of 128×128 pixel size from the first bitmap data, and writing the bitmap data of 128×128 pixel size into a memory space of 128×128 byte size aligned by four bytes;
the intercepting, from the memory space of the (⌈M/4⌉×4)×(⌈M/4⌉×4) size, third bitmap data with a pixel size of M×M and taking the third bitmap data as bitmap data input into the image processing network comprises:
And intercepting bitmap data with the size of 127×127 pixels from the memory space with the size of 128×128 bytes, and taking the bitmap data with the size of 127×127 pixels as bitmap data of an input template network.
6. The bitmap data processing method of an image processing network according to claim 4, wherein the SiamRPN network comprises a detection network; and the extracting, from the first bitmap data, second bitmap data with a pixel size of (⌈M/4⌉×4)×(⌈M/4⌉×4) and writing the second bitmap data into a four-byte-aligned memory space of (⌈M/4⌉×4)×(⌈M/4⌉×4) size comprises:
extracting 256×256-pixel-sized bitmap data from the first bitmap data, and writing the 256×256-pixel-sized bitmap data into a 256×256-byte-sized memory space aligned in four bytes;
the intercepting, from the memory space of the (⌈M/4⌉×4)×(⌈M/4⌉×4) size, third bitmap data with a pixel size of M×M and taking the third bitmap data as bitmap data input into the image processing network comprises:
And intercepting bitmap data with 255×255 pixels from the memory space with 256×256 bytes, and taking the bitmap data with 255×255 pixels as bitmap data input into the detection network.
7. The bitmap data processing method of an image processing network according to claim 1, wherein the image processing network comprises a SiamRPN network; the SiamRPN network includes a plurality of convolutional layers, wherein at least one of the plurality of convolutional layers is replaced with or connected to an ARM Neon-optimized convolutional layer.
8. A bitmap data processing apparatus of an image processing network, comprising:
the acquisition module is used for acquiring the first bitmap data;
an extraction module, configured to extract, from the first bitmap data, second bitmap data with a pixel size of (⌈M/4⌉×4)×(⌈M/4⌉×4), and write the second bitmap data into a four-byte-aligned memory space of (⌈M/4⌉×4)×(⌈M/4⌉×4) size, wherein ⌈·⌉ represents rounding up;
an interception module, configured to intercept, from the memory space of the (⌈M/4⌉×4)×(⌈M/4⌉×4) size, third bitmap data with a pixel size of M×M, and take the third bitmap data as bitmap data input into an image processing network, wherein the pixel size of the bitmap data required to be input into the image processing network is M×M.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the bitmap data processing method of the image processing network according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the bitmap data processing method of the image processing network according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010595853.XA CN111861862B (en) | 2020-06-28 | 2020-06-28 | Bitmap data processing method and device of image processing network and computer equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111861862A CN111861862A (en) | 2020-10-30 |
CN111861862B true CN111861862B (en) | 2024-07-26 |
Family
ID=72988161
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110598844A (en) * | 2019-08-06 | 2019-12-20 | 天津大学 | Parallel convolution neural network accelerator based on FPGA and acceleration method |
CN110737473A (en) * | 2019-09-24 | 2020-01-31 | 北京小米移动软件有限公司 | Data processing method and device, terminal and storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5717845A (en) * | 1994-12-13 | 1998-02-10 | Microsoft Corporation | Method and apparatus for transferring a brush pattern to a destination bitmap |
US6084600A (en) * | 1996-03-15 | 2000-07-04 | Micron Technology, Inc. | Method and apparatus for high-speed block transfer of compressed and word-aligned bitmaps |
EP1318665B1 (en) * | 2001-12-06 | 2015-02-25 | Canon Kabushiki Kaisha | Image processing apparatus and method, program, and storage medium |
CN103914852B (en) * | 2014-03-14 | 2018-03-30 | 兰州交通大学 | DICOM medical images kinematic nonlinearity based on CUDA adjusts window method |
US11861484B2 (en) * | 2018-09-28 | 2024-01-02 | Qualcomm Incorporated | Neural processing unit (NPU) direct memory access (NDMA) hardware pre-processing and post-processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||