Background
With the continuous progress of digital technology, people's daily lives have been profoundly changed. Novel technologies such as mobile payment, bike sharing and unmanned retail have become inseparable from daily life and have greatly facilitated shopping and travel. The success of these emerging technologies is inseparable from the popularity and application of two-dimensional codes.
Two-dimensional codes, also known as two-dimensional barcodes, use black and white pixel blocks to represent 0 and 1. The most common two-dimensional code in daily life is the QR code. The two-dimensional code can be successfully applied in a wide variety of settings mainly because it is extremely robust and has ample information capacity, which in turn allows for strong error correction. This makes the two-dimensional code a fast, efficient and robust information carrier.
However, some specific application scenarios do not rely on the high storage capacity and high redundancy of the two-dimensional code, and in these scenarios the high information redundancy becomes a disadvantage. For example, in a visual-impairment assistance project based on two-dimensional codes and a mobile phone, two-dimensional codes carrying positioning and category information are pasted on everyday tools to meet the recognition needs of visually impaired users. For another example, in patent publication No. CN102735235B, a robot performs indoor navigation using QR codes attached at location points.
These scenarios share common features: the two-dimensional codes are used only for positioning or classification, and the system contains only a limited number of categories, so high-capacity storage is not required. Directly applying a standard QR code therefore causes problems: positioning is not robust, and the redundant information leads to decoding errors.
If the two-dimensional code occupies too few pixels in the image, that is, if the target is too small, a decoder cannot correctly locate and decode it. The black-and-white pixel block, as the minimum unit of the two-dimensional code, is highly susceptible to degraded image quality, and decoding then fails.
Although there are many patents on two-dimensional code positioning and decoding, such as the positioning method of publication No. CN106485183A and the decoding method of publication No. CN105138940A, these methods cannot fundamentally solve the above problems.
Therefore, a new two-dimensional pattern recognition method is needed that can effectively reduce the information redundancy of the two-dimensional code and improve robustness at low pixel resolution.
Disclosure of Invention
Aiming at the above defects in the prior art, the present invention aims to provide a two-dimensional texture code and corresponding encoding and decoding methods, which can effectively reduce information redundancy and improve robustness at low pixel resolution.
According to one aspect of the present invention, there is provided a two-dimensional texture code, comprising an L-shaped format correction area and a rectangular texture data area. The format correction area is used for correcting the format of the two-dimensional texture code. The rectangular texture data area is divided into m rectangular sub-regions, each sub-region is occupied by one texture picture, there are n different types of texture pictures in total, each type of texture picture corresponds to one decoding character, and n is greater than m.
Preferably, m is 4, and the texture data region is divided into 4 rectangular sub-regions.
Preferably, n is 10, i.e. there are 10 different types of texture pictures.
Preferably, the format correction region is an all-black L shape surrounding two adjacent sides of the rectangular texture data region on the outside.
Preferably, the format correction area is used for correcting the format of the two-dimensional texture code, i.e. correcting the two-dimensional texture code to a standard direction and shape.
Preferably, a single texture picture is 200 pixels in size, and the width of the format correction region is 25 pixels.
According to another aspect of the present invention, there is also provided a two-dimensional texture code encoding method, applicable to the above two-dimensional texture code, comprising the following steps:
determining a number to be encoded;
and filling each sub-region, according to the number to be encoded, with the texture picture of the type corresponding to each decoding character.
Preferably, the method further comprises the following step: determining m and n according to the number to be encoded, wherein n^m is larger than the number to be encoded.
According to another aspect of the present invention, there is also provided a two-dimensional texture code decoding method, which is applicable to the two-dimensional texture code, and includes the following steps:
positioning the two-dimensional texture code;
detecting the L-shaped format correction area, and correcting the two-dimensional texture code to a standard direction by using the L-shaped format correction area;
classifying the texture picture of each sub-region of the corrected two-dimensional texture code, and obtaining the decoding result from the decoding character corresponding to each texture picture type.
Preferably, the two-dimensional texture code is located using a method such as DenseNet, FPN or Faster R-CNN.
Compared with the prior art, the invention has the following beneficial effects:
the two-dimensional texture code and its encoding and decoding methods can effectively reduce the information redundancy of the two-dimensional code and improve decoding robustness at low pixel resolution.
Detailed Description
The present invention will be described in detail below with reference to specific embodiments. The following embodiments will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications can be made by persons skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
Fig. 1 shows a two-dimensional texture code according to an embodiment of the invention. As shown in Fig. 1, the two-dimensional texture code consists of two parts: an L-shaped format correction area, which is used for correcting the format of the two-dimensional texture code, and a texture data area.
The texture data area is a rectangular area and is divided into m rectangular sub-areas.
In a preferred embodiment of the present invention, as shown in fig. 1, m is 4, that is, the texture data area is divided into 4 rectangular sub-areas.
Each sub-region is occupied by one texture picture. There are n types of textures in total, each type of texture corresponds to one decoding character, and n > m.
In a preferred embodiment of the invention, n = 10, i.e. there are 10 different types of textures.
Fig. 2 shows the 10 types of textures and their corresponding decoding characters according to an embodiment of the present invention. As shown in Fig. 2, the 10 different types of textures of this embodiment correspond to decoding characters 1 to 10, respectively.
In an embodiment, the L-shaped format correction area is entirely black and is used for correcting the format of the two-dimensional texture code, i.e. correcting it to a standard direction and shape.
Without the format correction area, the texture data area could be rotated arbitrarily and the order of the sub-regions could not be determined, so the decoding result would not be unique.
Fig. 3 is a schematic diagram of format correction according to an embodiment of the present invention. As shown in Fig. 3, in this embodiment the texture data region can be located through format correction: sub-region 3 is identified by the right angle of the L shape, from which the remaining sub-regions 1, 2 and 4 are determined. Using the anchor-point information of the L-shaped format correction area, the two-dimensional texture code can be corrected to the standard direction and shape by a perspective transformation.
For any two-dimensional texture code, after correction with the format correction area, the decoder produces a four-digit decimal result; the decoding result of the two-dimensional texture code shown in Fig. 1 is 8694.
In other embodiments of the present invention, other correction methods may also be used to correct the two-dimensional texture code to a standard direction and shape, which is not limited by the present invention.
In the present embodiment, the texture data area is divided into 4 sub-areas and 10 texture patterns are used, but in other embodiments of the present invention the number of sub-areas and the number of texture pattern types are not limited; the texture patterns themselves are not fixed either and can be adjusted or reselected according to the actual situation.
It can be calculated that the data capacity of the two-dimensional texture code of this embodiment, which has 4 sub-regions and 10 texture patterns, is 10^4 = 10000.
In practice, if the data capacity needs to be enlarged or reduced, the number of sub-regions or the number of texture pattern types is increased or decreased accordingly. The data capacity is given by the following formula:
c = n^m    (1)
where n is the number of texture picture types and m is the number of sub-regions in the texture data region.
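For illustration only, a minimal Python sketch of formula (1) is given below; the function names are illustrative and not part of the invention.

```python
def texture_code_capacity(n: int, m: int) -> int:
    """Data capacity c = n^m for n texture types and m sub-regions."""
    return n ** m

def min_subregions(n: int, required: int) -> int:
    """Smallest m such that n^m covers the required number of codes."""
    m = 1
    while n ** m < required:
        m += 1
    return m

print(texture_code_capacity(10, 4))  # 10000, the capacity of this embodiment
print(min_subregions(10, 10000))     # 4
```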
To achieve good results, the proportions of the texture data region and the format correction region need to be balanced. If the correction area is too large, decoding of the textures is affected; if the format correction area is too small, correction becomes difficult.
In a preferred embodiment, a single texture picture is set to 200 pixels and the width of the format correction region is set to 25 pixels, which satisfies the requirements of both decoding and correction.
Compared with the prior art, the invention can effectively reduce the information redundancy of the two-dimensional code, improves robustness at low pixel resolution, and is suitable for application scenarios in which the system contains only a limited number of categories and does not need high-capacity storage.
The invention further provides a two-dimensional texture code encoding method for the above two-dimensional texture code. In one embodiment of the invention, the encoding method comprises the following steps: determining the number to be encoded; and filling each sub-region, according to the number to be encoded, with the texture picture of the type corresponding to each decoding character.
In this embodiment, the encoding part takes as input a four-digit number to be encoded, ranging from 0000 to 9999 for a total of ten thousand values. According to the input number, the corresponding texture pictures are arranged in the prescribed layout and combined with the format correction area to form the final two-dimensional texture code.
In another embodiment, the method may further include a step of determining the data capacity: determining m and n according to the number to be encoded, wherein n^m is larger than the number to be encoded.
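As an illustration of this encoding procedure, a minimal Python sketch is given below; the tile images, the sub-region order and the placement of the L-shaped border are assumptions based on one reading of Fig. 1 and Fig. 3, not requirements of the invention.

```python
import numpy as np

TILE = 200    # side length of one texture picture, in pixels (per this embodiment)
BORDER = 25   # width of the L-shaped format correction area, in pixels

def encode_texture_code(number: int, textures: list) -> np.ndarray:
    """Compose a 2x2 texture data area with an all-black L-shaped border.

    `textures` is an assumed list of ten pre-rendered 200x200 uint8 tiles,
    one per decoding character; `number` is the four-digit value to encode.
    The placement of the L (here: left and bottom edges) and the mapping of
    decimal digits to texture types are illustrative assumptions.
    """
    digits = [int(d) for d in f"{number:04d}"]     # e.g. 8694 -> [8, 6, 9, 4]
    side = 2 * TILE + BORDER
    canvas = np.zeros((side, side), dtype=np.uint8)  # black = format correction area
    for idx, digit in enumerate(digits):
        row, col = divmod(idx, 2)
        y0 = row * TILE                 # data area sits above the bottom bar
        x0 = BORDER + col * TILE        # data area sits right of the left bar
        canvas[y0:y0 + TILE, x0:x0 + TILE] = textures[digit]
    return canvas
```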
The present invention further provides a two-dimensional texture code decoding method for the above two-dimensional texture code. Fig. 4 is a flowchart of the two-dimensional texture code decoding method according to an embodiment of the present invention. As shown in Fig. 4, the method includes the following steps:
and S01, positioning the two-dimensional texture code.
In one embodiment, existing object detection techniques, such as DenseNet, FPN or Faster R-CNN, may be employed for the positioning.
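By way of example, the positioning step could be sketched with the torchvision Faster R-CNN API as below; the checkpoint path and the assumption of a detector already fine-tuned on texture-code images are hypothetical, and DenseNet- or FPN-based detectors could be substituted.

```python
import torch
import torchvision

# Assumes a Faster R-CNN detector already fine-tuned on images containing
# two-dimensional texture codes; the checkpoint file below is hypothetical.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
model.load_state_dict(torch.load("texture_code_detector.pth"))
model.eval()

def locate_texture_codes(image: torch.Tensor, score_thresh: float = 0.5) -> torch.Tensor:
    """Return bounding boxes of candidate texture codes in a CHW float image."""
    with torch.no_grad():
        out = model([image])[0]
    keep = out["scores"] >= score_thresh
    return out["boxes"][keep]
```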
Compared with a conventional two-dimensional code, the texture code of the embodiment of the invention carries distinct texture information and is therefore easier to locate.
S02: detecting the L-shaped format correction area, and correcting the two-dimensional texture code to the standard direction and shape by using the L-shaped format correction area.
In one embodiment, the L-shaped format correction area can be detected by Hough transformation and threshold detection, and the two-dimensional texture code can then be corrected to the standard direction and shape by a perspective transformation using the anchor-point information of the L shape.
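A minimal OpenCV sketch of this correction step is given below, assuming the four outer corner points of the code have already been recovered (for example from Hough line intersections on the thresholded black border); that detection step is omitted here.

```python
import cv2
import numpy as np

TILE, BORDER = 200, 25
SIZE = 2 * TILE + BORDER   # side length of the rectified code, in pixels

def rectify(image: np.ndarray, corners: np.ndarray) -> np.ndarray:
    """Warp a detected texture code to its standard direction and shape.

    `corners` are the four outer corner points of the code in the source
    image, ordered top-left, top-right, bottom-right, bottom-left after the
    right angle of the L-shaped correction area has been used to fix the
    orientation (that disambiguation is assumed to have been done already).
    """
    dst = np.float32([[0, 0], [SIZE, 0], [SIZE, SIZE], [0, SIZE]])
    M = cv2.getPerspectiveTransform(np.float32(corners), dst)
    return cv2.warpPerspective(image, M, (SIZE, SIZE))
```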
S03: classifying the texture picture of each sub-region of the corrected two-dimensional texture code, and obtaining the decoding result from the decoding character corresponding to each texture picture type.
In an embodiment, the texture pictures of the corrected two-dimensional texture code can be classified and decoded using deep learning techniques.
Specifically, a classifier trained by deep learning can be used to classify the texture pictures during decoding, which improves the accuracy and adaptability of the classification.
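As one possible realization, a small PyTorch classifier for the texture tiles is sketched below; the architecture is illustrative, and any sufficiently accurate image classifier could be used instead.

```python
import torch
import torch.nn as nn

class TextureClassifier(nn.Module):
    """Small CNN that classifies a 200x200 grayscale tile into one of the
    10 texture types; illustrative only, not mandated by the invention."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def decode_tiles(tiles: torch.Tensor, model: nn.Module) -> list:
    """Classify the four sub-region tiles (shape [4, 1, 200, 200]) and
    return the four decoding characters as class indices."""
    with torch.no_grad():
        return model(tiles).argmax(dim=1).tolist()
```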
In a preferred embodiment, in order to keep the classifier accurate and adaptable, the training data can be augmented during training to introduce conditions that may be encountered in real scenes; training only on the original texture pictures would not transfer to practical environments. When a two-dimensional texture code is printed and pasted in a real environment, it can be affected by uneven illumination, JPEG compression, geometric distortion and the like, and these degradations tend to reduce the accuracy of a deep-learning-based classifier.
Fig. 5 shows the deep learning process of the texture picture classifier according to an embodiment of the invention. As shown in Fig. 5, the process includes the following steps: generating a random number; encoding the random number; generating a two-dimensional texture code; performing data augmentation on the two-dimensional texture code; generating a decoding result; and comparing the decoding result with the random number to train the classifier.
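The Fig. 5 loop could be sketched as follows, reusing the classifier above; the augmentation transforms merely stand in for the degradations mentioned earlier, and `encode_fn` and `split_fn` are hypothetical helpers that render a texture code image for a number and cut it into the four sub-region tiles.

```python
import random
import torch
import torch.nn as nn
import torchvision.transforms as T

# Stand-ins for real-world degradations (uneven illumination, geometric
# distortion, blur/compression artifacts); the exact transforms are illustrative.
augment = T.Compose([
    T.ColorJitter(brightness=0.4, contrast=0.4),
    T.RandomPerspective(distortion_scale=0.3, p=0.8),
    T.GaussianBlur(kernel_size=5),
])

def training_step(model, optimizer, encode_fn, split_fn):
    """One iteration of the Fig. 5 loop: random number -> encode -> augment ->
    classify -> compare with the ground-truth digits."""
    number = random.randint(0, 9999)
    code_image = augment(encode_fn(number))   # encode_fn returns a PIL image or tensor
    tiles = split_fn(code_image)              # assumed shape [4, 1, 200, 200]
    target = torch.tensor([int(d) for d in f"{number:04d}"])
    loss = nn.functional.cross_entropy(model(tiles), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```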
Compared with existing two-dimensional code encoding and decoding methods, the embodiments of the invention can effectively reduce the information redundancy of the two-dimensional code, allow fast and simple encoding, and improve decoding robustness at low pixel resolution and in complex environments.
The technical features of the above embodiments may be combined arbitrarily according to actual needs. For brevity, not all possible combinations of these technical features are described, but as long as a combination contains no contradiction it should be considered within the scope of this description.
The above embodiments express only several implementations of the present invention, and their description is specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.