
CN112905977A - Verification code generation method based on image style conversion


Info

Publication number
CN112905977A
CN112905977A
Authority
CN
China
Prior art keywords
image
style
content
verification code
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011327160.9A
Other languages
Chinese (zh)
Inventor
刘晓
曹雄港
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University
Priority to CN202011327160.9A
Publication of CN112905977A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 - User authentication
    • G06F 21/33 - User authentication using certificates
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/04 - Context-preserving transformations, e.g. by using an importance map
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2221/00 - Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 2221/21 - Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 2221/2133 - Verifying human interaction, e.g., Captcha

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract



The invention discloses a verification code generation method based on image style conversion, comprising the following steps: step S1: training a VGG model through the RMSProp algorithm to obtain a VGG-19 network model; step S2: sending the content image into the trained VGG-19 network model to obtain the feature map output of the content image on each layer; step S3: sending the style image into the VGG-19 network model as well to obtain the feature map output of the style image on each layer; step S4: sending the feature map outputs of the content image and of the style image on each layer into the trained VGG-19 network model and obtaining the verification code through the image verification code generation algorithm. The invention is novel in structure and ingenious in conception; in terms of both recognition rate and recognition efficiency, the generated image verification code can be shown to have the basic property of being friendly to humans but unfriendly to machines, so that it can well protect websites.


Description

Verification code generation method based on image style conversion
Technical Field
The invention relates to the technical field of verification code generation, in particular to a verification code generation method based on image style conversion.
Background
The internet brings much convenience to people's lives, but it also brings much harm. Network attacks run by automated computer programs are ubiquitous and occur at all times, such as batch registration and login, large-scale credential stuffing, malicious voting, and forum flooding. They not only cause great harm to computer networks but also threaten people's information security, and verification code technology can effectively prevent these threats.
There are many types of verification codes, such as the SMS verification codes that often appear during dynamic login, the image recognition verification codes used when logging in to websites, the verification codes on the 12306 website that require clicking all images of the same category, and a variety of other unusual verification codes such as sliding puzzle pieces, clicking Chinese characters, and solving arithmetic questions. A verification code distinguishes whether the operator is a human user or a machine through the operator's recognition and input, and can to a great extent prevent illegal machine operations, thereby protecting the website.
At present, the verification codes on many websites are image-type verification codes: the pass to enter the website is obtained by recognizing characters and typing them in. The problem is that the image verification codes widely used by websites today are low-level; although human eyes can easily recognize them, current machines also recognize them at a high rate, which greatly weakens the function of the verification code. To an attacker, a low-level image verification code is no different from an open door. Therefore, it is necessary to design a verification code generation method based on image style conversion.
Disclosure of Invention
In view of the above situation, and in order to overcome the defects of the prior art, a verification code generation method based on image style conversion is provided. The method is novel in structure and ingenious in conception; in terms of both recognition rate and recognition efficiency, the generated image verification code can be shown to have the basic property of being friendly to humans but unfriendly to machines, so that websites can be well protected.
In order to achieve the purpose, the invention provides the following technical scheme: a verification code generation method based on image style conversion comprises the following steps:
step S1: training the VGG model through the RMSProp algorithm to obtain a VGG-19 network model;
step S2: sending the content image into the trained VGG-19 network model to obtain the feature map output of the content image on each layer;
step S3: sending the style image into the VGG-19 network model to obtain the feature map output of the style image on each layer;
step S4: sending the feature map outputs of the content image on each layer and of the style image on each layer into the trained VGG-19 network model, and obtaining the verification code through the image verification code generation algorithm.
Preferably, the RMSProp algorithm adds the accumulated squared gradients of the previous steps and the squared gradient of the current step by exponentially weighted averaging, and updates the model parameters by the following expressions:
E[g^2]_t = \alpha E[g^2]_{t-1} + (1 - \alpha) g_t^2
W_{t+1} = W_t - \frac{\eta_0}{\sqrt{E[g^2]_t + \epsilon}} g_t
where \eta_0 denotes the initial learning rate, W_t denotes the model parameters at time t, g_t = \partial J(W)/\partial W denotes the gradient of the loss function J(W) with respect to W at time t, \epsilon is a very small number set to avoid a zero denominator, E[g^2]_t denotes the exponentially weighted sum of the squared gradients of the first t steps, and \alpha denotes the exponential weighting (decay) coefficient. The key quantity under the root sign adds the previous accumulated squared gradients and the current squared gradient by exponentially weighted averaging; the square root of this quantity is then taken, and the initial learning rate is divided by the root value, which gives the updated learning rate.
Preferably, in step S2, the content image is represented by a Gram matrix, and the steps are as follows:
firstly, extracting the feature maps of each layer of the style image and the target image (initially a noise image) by using the VGG-19 network model;
secondly, converting the feature maps of a given layer of the two images into Gram matrices respectively, and calculating the difference between the two Gram matrices;
thirdly, performing the same operation on the feature maps of each layer of the two images and adding up the per-layer differences to obtain a sum; the target image is continuously adjusted during this calculation so that it approaches the content image and the style image simultaneously.
Preferably, the expression of the Gram matrix is as follows:
G_{ij} = \sum_k F_{ik} F_{jk}
where F_{ik} represents the image features, i.e. the response of the i-th feature map at position k in the given layer.
Preferably, the expression of the image verification code generation algorithm is as follows:
let \vec{p} be the original content image, \vec{a} be the original style image, and \vec{x} be the image to be generated (initially a noise image), which is the object to be trained. With the content loss L_{content}(\vec{p}, \vec{x}) and the style loss L_{style}(\vec{a}, \vec{x}), the two losses can be combined, and the final loss function may be represented by the following formula:
L_{total}(\vec{p}, \vec{a}, \vec{x}) = \alpha L_{content}(\vec{p}, \vec{x}) + \beta L_{style}(\vec{a}, \vec{x})
where \alpha and \beta are hyper-parameters that balance the two losses: if \alpha is larger, the generated image is closer to the content image \vec{p}; if \beta is larger, it is closer to the style of the style image \vec{a}.
After training begins, a noise image \vec{x} is initialized randomly; this is the image object to be continuously optimized. \vec{x} is input into the network together with \vec{p} and \vec{a}; the features of the content image \vec{p} are extracted on a certain layer, the features of the style image \vec{a} are extracted on multiple layers and given weights, and the total loss L_{total} is calculated from L_{content} and L_{style} with their different weights. Then \vec{x} is adjusted according to the gradient descent algorithm so as to reduce L_{total}. This is iterated until the iteration is finished and \vec{x} is output; at this time \vec{x} possesses both the content and the style and only needs to be saved.
Preferably, in the gradient descent algorithm, W is set as a parameter of the model and is therefore the object that needs to be trained continuously. Let J(W) be the loss function, \eta the learning rate, and \partial J(W)/\partial W the partial derivative of the loss function J(W) with respect to the parameter W. The standard gradient descent algorithm updates the model parameters by the following expression:
W_{t+1} = W_t - \eta \frac{\partial J(W)}{\partial W}
where W_t represents the model parameters at time t.
Preferably, the step S4 further includes an optimization algorithm:
let A denote an array storing a plurality of style images, A = {\vec{a}^{(1)}, \vec{a}^{(2)}, \dots, \vec{a}^{(n)}}, where \vec{p} represents the original content image, \vec{a}^{(k)} represents the original k-th style image, and \vec{x} represents the image to be generated; L_{content}(\vec{p}, \vec{x}) represents the loss function between the content image \vec{p} and the target image \vec{x}, and L_{style}^{(m,k)}(\vec{a}^{(k)}, \vec{x}) represents the loss function between the generated image \vec{x} and the k-th style image \vec{a}^{(k)} under the feature map of the m-th layer. The total loss with respect to the plurality of style images under the given m-th layer feature map can then be expressed as:
L_{layer}^{(m)} = \sum_{k=1}^{n} L_{style}^{(m,k)}(\vec{a}^{(k)}, \vec{x})
Since the style loss is a weighted average over five layers of feature maps, the overall style loss function over the five layers of feature maps can be expressed as:
L_{layer\text{-}total} = \sum_{m=1}^{5} w_m L_{layer}^{(m)}
where w_m is the weight of the m-th layer. Next, the content loss and the various style losses are combined to generate the image, and the final loss function can be expressed as:
L_{total} = \alpha L_{content}(\vec{p}, \vec{x}) + \frac{\beta}{n} L_{layer\text{-}total}
where n is the length of the array A; the factor 1/n averages the style loss L_{layer-total} between the generated image \vec{x} and all the style images, so that L_{content} and L_{layer-total} are on the same scale. \alpha and \beta are hyper-parameters that balance the two losses: if \alpha is larger, the target image is closer to the original content image \vec{p}; if \beta is larger, it is closer to a mixture of the plurality of styles.
The invention has the beneficial effects that:
in terms of both recognition rate and recognition efficiency, the generated image verification code can be shown to have the basic property of being friendly to humans but unfriendly to machines, so that it can well protect websites.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of a verification code generation method based on image style conversion according to the present invention.
Detailed Description
The following describes the present invention in further detail with reference to fig. 1.
As shown in FIG. 1, the present invention provides the following technical solution: a verification code generation method based on image style conversion comprises the following steps:
step S1: training the VGG model through the RMSProp algorithm to obtain a VGG-19 network model;
step S2: sending the content image into the trained VGG-19 network model to obtain the feature map output of the content image on each layer;
step S3: sending the style image into the VGG-19 network model to obtain the feature map output of the style image on each layer (a sketch of the feature-map extraction in steps S2 and S3 is given after this list);
step S4: sending the feature map outputs of the content image on each layer and of the style image on each layer into the trained VGG-19 network model, and obtaining the verification code through the image verification code generation algorithm.
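For illustration, the following is a minimal Python (PyTorch/torchvision) sketch of the feature-map extraction in steps S2 and S3. The use of torchvision's ImageNet-pretrained VGG-19 weights, the chosen layer indices, the input size, and the normalization constants are assumptions made for this sketch; they are not specified by the method above, whose step S1 trains the VGG model with RMSProp.

```python
# A sketch of steps S2/S3: extract per-layer feature maps from a pretrained VGG-19.
import torch
from PIL import Image
from torchvision import models, transforms

# Assumed Gatys-style layer selection: five conv layers for style, one for content.
STYLE_LAYERS = {0: "conv1_1", 5: "conv2_1", 10: "conv3_1", 19: "conv4_1", 28: "conv5_1"}
CONTENT_LAYER = {21: "conv4_2"}

def load_image(path, size=256):
    """Load an image file and convert it to a normalized (1, 3, size, size) tensor."""
    tf = transforms.Compose([
        transforms.Resize((size, size)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0)

def extract_features(img, vgg, layers):
    """Run img through vgg.features and collect the outputs of the requested layers."""
    feats, x = {}, img
    for idx, layer in enumerate(vgg.features):
        x = layer(x)
        if idx in layers:
            feats[layers[idx]] = x
    return feats

vgg19 = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).eval()
for p in vgg19.parameters():
    p.requires_grad_(False)   # the network stays fixed; only the image is optimized later

content_feats = extract_features(load_image("content.png"), vgg19, CONTENT_LAYER)  # step S2
style_feats = extract_features(load_image("style.png"), vgg19, STYLE_LAYERS)       # step S3
```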
Preferably, the RMSProp algorithm adds the accumulated squared gradients of the previous steps and the squared gradient of the current step by exponentially weighted averaging, and updates the model parameters by the following expressions:
E[g^2]_t = \alpha E[g^2]_{t-1} + (1 - \alpha) g_t^2
W_{t+1} = W_t - \frac{\eta_0}{\sqrt{E[g^2]_t + \epsilon}} g_t
where \eta_0 denotes the initial learning rate, W_t denotes the model parameters at time t, g_t = \partial J(W)/\partial W denotes the gradient of the loss function J(W) with respect to W at time t, \epsilon is a very small number set to avoid a zero denominator, E[g^2]_t denotes the exponentially weighted sum of the squared gradients of the first t steps, and \alpha denotes the exponential weighting (decay) coefficient. The key quantity under the root sign adds the previous accumulated squared gradients and the current squared gradient by exponentially weighted averaging; the square root of this quantity is then taken, and the initial learning rate is divided by the root value, which gives the updated learning rate.
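As a concrete illustration of this update rule, a minimal NumPy sketch of one RMSProp step follows; the default values for the decay coefficient, the initial learning rate, and epsilon are illustrative assumptions rather than values given above.

```python
# A minimal NumPy sketch of the RMSProp update described above.
import numpy as np

def rmsprop_step(W, grad, Eg2, eta0=0.001, alpha=0.9, eps=1e-8):
    """One update: exponentially weighted average of squared gradients,
    then divide the initial learning rate by its square root."""
    Eg2 = alpha * Eg2 + (1.0 - alpha) * grad ** 2    # E[g^2]_t
    W = W - eta0 / np.sqrt(Eg2 + eps) * grad         # W_{t+1}
    return W, Eg2

# usage: Eg2 = np.zeros_like(W); W, Eg2 = rmsprop_step(W, grad, Eg2)
```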
Preferably, in step S2, the content image is represented by a Gram matrix, which includes the following steps:
firstly, extracting the feature maps of each layer of the style image and the target image (initially a noise image) by using the VGG-19 network model;
secondly, converting the feature maps of a given layer of the two images into Gram matrices respectively, and calculating the difference between the two Gram matrices;
thirdly, performing the same operation on the feature maps of each layer of the two images and adding up the per-layer differences to obtain a sum; the target image is continuously adjusted during this calculation so that it approaches the content image and the style image simultaneously.
Preferably, the expression of the Gram matrix is as follows:
G_{ij} = \sum_k F_{ik} F_{jk}
where F_{ik} represents the image features, i.e. the response of the i-th feature map at position k in the given layer.
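A minimal sketch of this Gram-matrix computation for a single layer's feature map is given below, assuming a feature tensor of shape (1, C, H, W) such as the one produced by the extraction sketch above; dividing by the number of entries is an assumed normalization convention, not part of the stated expression.

```python
# Gram matrix of one layer's feature map: G_ij = sum_k F_ik * F_jk.
import torch

def gram_matrix(feat):
    """F is the (channels x positions) matrix obtained by flattening the feature map."""
    _, c, h, w = feat.shape
    F = feat.view(c, h * w)           # flatten the spatial positions
    return (F @ F.t()) / (c * h * w)  # assumed normalization

# a per-layer style difference could then be, for example:
# torch.mean((gram_matrix(gen_feat) - gram_matrix(style_feat)) ** 2)
```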
Preferably, the expression of the image verification code generation algorithm is as follows:
let \vec{p} be the original content image, \vec{a} be the original style image, and \vec{x} be the image to be generated (initially a noise image), which is the object to be trained. With the content loss L_{content}(\vec{p}, \vec{x}) and the style loss L_{style}(\vec{a}, \vec{x}), the two losses can be combined, and the final loss function may be represented by the following formula:
L_{total}(\vec{p}, \vec{a}, \vec{x}) = \alpha L_{content}(\vec{p}, \vec{x}) + \beta L_{style}(\vec{a}, \vec{x})
where \alpha and \beta are hyper-parameters that balance the two losses: if \alpha is larger, the generated image is closer to the content image \vec{p}; if \beta is larger, it is closer to the style of the style image \vec{a}.
After training begins, a noise image \vec{x} is initialized randomly; this is the image object to be continuously optimized. \vec{x} is input into the network together with \vec{p} and \vec{a}; the features of the content image \vec{p} are extracted on a certain layer, the features of the style image \vec{a} are extracted on multiple layers and given weights, and the total loss L_{total} is calculated from L_{content} and L_{style} with their different weights. Then \vec{x} is adjusted according to the gradient descent algorithm so as to reduce L_{total}. This is iterated until the iteration is finished and \vec{x} is output; at this time \vec{x} possesses both the content and the style and only needs to be saved.
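The loop just described can be sketched as follows, reusing extract_features and gram_matrix from the earlier sketches. The layer weights, the values of alpha and beta, the iteration count, and the use of the Adam optimizer in place of plain gradient descent are illustrative assumptions for this sketch only.

```python
# A sketch of the generation loop: optimize a random noise image x_hat so that
# alpha * L_content + beta * L_style decreases.
import torch

def generate_captcha_image(content_img, style_img, vgg, n_iters=300, alpha=1.0, beta=1e4):
    layers = {**STYLE_LAYERS, **CONTENT_LAYER}
    c_feats = extract_features(content_img, vgg, layers)
    s_feats = extract_features(style_img, vgg, layers)

    x_hat = torch.randn_like(content_img, requires_grad=True)  # random noise image to optimize
    opt = torch.optim.Adam([x_hat], lr=0.05)
    style_w = {name: 0.2 for name in STYLE_LAYERS.values()}    # equal weights over five layers

    for _ in range(n_iters):
        g_feats = extract_features(x_hat, vgg, layers)
        l_content = torch.mean((g_feats["conv4_2"] - c_feats["conv4_2"]) ** 2)
        l_style = sum(
            style_w[n] * torch.mean((gram_matrix(g_feats[n]) - gram_matrix(s_feats[n])) ** 2)
            for n in STYLE_LAYERS.values()
        )
        loss = alpha * l_content + beta * l_style               # L_total
        opt.zero_grad()
        loss.backward()
        opt.step()

    return x_hat.detach()  # x_hat now carries the content in the target style; save it
```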
Preferably, in the gradient descent algorithm, W is set as a parameter of the model and is therefore the object that needs to be trained continuously. Let J(W) be the loss function, \eta the learning rate, and \partial J(W)/\partial W the partial derivative of the loss function J(W) with respect to the parameter W. The standard gradient descent algorithm updates the model parameters by the following expression:
W_{t+1} = W_t - \eta \frac{\partial J(W)}{\partial W}
where W_t represents the model parameters at time t.
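For completeness, a one-line sketch of this standard update:

```python
# Standard gradient-descent update: W_{t+1} = W_t - eta * dJ/dW.
def gradient_descent_step(W, grad_J, eta):
    return W - eta * grad_J
```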
Preferably, step S4 further includes an optimization algorithm:
let A denote an array storing a plurality of style images, A = {\vec{a}^{(1)}, \vec{a}^{(2)}, \dots, \vec{a}^{(n)}}, where \vec{p} represents the original content image, \vec{a}^{(k)} represents the original k-th style image, and \vec{x} represents the image to be generated; L_{content}(\vec{p}, \vec{x}) represents the loss function between the content image \vec{p} and the target image \vec{x}, and L_{style}^{(m,k)}(\vec{a}^{(k)}, \vec{x}) represents the loss function between the generated image \vec{x} and the k-th style image \vec{a}^{(k)} under the feature map of the m-th layer. The total loss with respect to the plurality of style images under the given m-th layer feature map can then be expressed as:
L_{layer}^{(m)} = \sum_{k=1}^{n} L_{style}^{(m,k)}(\vec{a}^{(k)}, \vec{x})
Since the style loss is a weighted average over five layers of feature maps, the overall style loss function over the five layers of feature maps can be expressed as:
L_{layer\text{-}total} = \sum_{m=1}^{5} w_m L_{layer}^{(m)}
where w_m is the weight of the m-th layer. Next, the content loss and the various style losses are combined to generate the image, and the final loss function can be expressed as:
L_{total} = \alpha L_{content}(\vec{p}, \vec{x}) + \frac{\beta}{n} L_{layer\text{-}total}
where n is the length of the array A; the factor 1/n averages the style loss L_{layer-total} between the generated image \vec{x} and all the style images, so that L_{content} and L_{layer-total} are on the same scale. \alpha and \beta are hyper-parameters that balance the two losses: if \alpha is larger, the target image is closer to the original content image \vec{p}; if \beta is larger, it is closer to a mixture of the plurality of styles.
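A sketch of this multi-style loss is given below; it reuses gram_matrix from the earlier sketch, and the layer names, layer weights, and default values of alpha and beta are illustrative assumptions.

```python
# Combined loss against several style images: the per-layer style losses are summed
# over all n styles, weighted per layer, and averaged by 1/n before being combined
# with the content loss.
import torch

def multi_style_loss(gen_feats, content_feats, style_feats_list, style_w, alpha=1.0, beta=1e4):
    n = len(style_feats_list)                          # length of the style array A
    l_content = torch.mean((gen_feats["conv4_2"] - content_feats["conv4_2"]) ** 2)

    l_layer_total = 0.0
    for name, w_m in style_w.items():                  # the five weighted style layers
        per_layer = sum(
            torch.mean((gram_matrix(gen_feats[name]) - gram_matrix(sf[name])) ** 2)
            for sf in style_feats_list                 # loss against each of the n styles
        )
        l_layer_total = l_layer_total + w_m * per_layer

    return alpha * l_content + (beta / n) * l_layer_total   # final L_total
```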
The invention has the beneficial effects that: in terms of both recognition rate and recognition efficiency, the generated image verification code can be shown to have the basic property of being friendly to humans but unfriendly to machines, so that it can well protect websites.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (7)

1. A verification code generation method based on image style conversion, characterized by comprising the following steps:
step S1: training the VGG model through the RMSProp algorithm to obtain a VGG-19 network model;
step S2: sending the content image into the trained VGG-19 network model to obtain the feature map output of the content image on each layer;
step S3: sending the style image into the VGG-19 network model to obtain the feature map output of the style image on each layer;
step S4: sending the feature map outputs of the content image on each layer and of the style image on each layer into the trained VGG-19 network model, and obtaining the verification code through the image verification code generation algorithm.
2. The verification code generation method based on image style conversion according to claim 1, characterized in that: the RMSProp algorithm adds the accumulated squared gradients of the previous steps and the squared gradient of the current step by exponentially weighted averaging, and updates the model parameters by the following expressions:
E[g^2]_t = \alpha E[g^2]_{t-1} + (1 - \alpha) g_t^2
W_{t+1} = W_t - \frac{\eta_0}{\sqrt{E[g^2]_t + \epsilon}} g_t
where \eta_0 denotes the initial learning rate, W_t denotes the model parameters at time t, g_t = \partial J(W)/\partial W denotes the gradient of the loss function J(W) with respect to W at time t, \epsilon is a very small number set to avoid a zero denominator, E[g^2]_t denotes the exponentially weighted sum of the squared gradients of the first t steps, and \alpha denotes the exponential weighting (decay) coefficient; the key quantity under the root sign adds the previous accumulated squared gradients and the current squared gradient by exponentially weighted averaging, the square root of this quantity is taken, and the initial learning rate is divided by the root value to obtain the updated learning rate.
3. The verification code generation method based on image style conversion according to claim 1, characterized in that: in step S2, the content image is represented by a Gram matrix, which includes the following steps:
firstly, extracting the feature maps of each layer of the style image and the target image (initially a noise image) by using the VGG-19 network model;
secondly, converting the feature maps of a given layer of the two images into Gram matrices respectively, and calculating the difference between the two Gram matrices;
thirdly, performing the same operation on the feature maps of each layer of the two images and adding up the per-layer differences to obtain a sum; the target image is continuously adjusted during this calculation so that it approaches the content image and the style image simultaneously.
4. The verification code generation method based on image style conversion according to claim 3, characterized in that: the expression of the Gram matrix is as follows:
G_{ij} = \sum_k F_{ik} F_{jk}
where F_{ik} represents the image features, i.e. the response of the i-th feature map at position k in the given layer.
5. The verification code generation method based on image style conversion according to claim 1, characterized in that: the expression of the image verification code generation algorithm is as follows:
let \vec{p} be the original content image, \vec{a} be the original style image, and \vec{x} be the image to be generated (initially a noise image), which is the object to be trained; with the content loss L_{content}(\vec{p}, \vec{x}) and the style loss L_{style}(\vec{a}, \vec{x}), the two losses can be combined, and the final loss function may be represented by the following formula:
L_{total}(\vec{p}, \vec{a}, \vec{x}) = \alpha L_{content}(\vec{p}, \vec{x}) + \beta L_{style}(\vec{a}, \vec{x})
where \alpha and \beta are hyper-parameters that balance the two losses: if \alpha is larger, the generated image is closer to the content image \vec{p}; if \beta is larger, it is closer to the style of the style image \vec{a};
after training begins, a noise image \vec{x} is initialized randomly; this is the image object to be continuously optimized; \vec{x} is input into the network together with \vec{p} and \vec{a}; the features of the content image \vec{p} are extracted on a certain layer, the features of the style image \vec{a} are extracted on multiple layers and given weights, and the total loss L_{total} is calculated from L_{content} and L_{style} with their different weights; then \vec{x} is adjusted according to the gradient descent algorithm so as to reduce L_{total}; this is iterated until the iteration is finished and \vec{x} is output; at this time \vec{x} possesses both the content and the style and only needs to be saved.
6. The verification code generation method based on image style conversion according to claim 5, characterized in that: in the gradient descent algorithm, W is set as a parameter of the model and is therefore the object that needs to be trained continuously; let J(W) be the loss function, \eta the learning rate, and \partial J(W)/\partial W the partial derivative of the loss function J(W) with respect to the parameter W; the standard gradient descent algorithm updates the model parameters by the following expression:
W_{t+1} = W_t - \eta \frac{\partial J(W)}{\partial W}
where W_t represents the model parameters at time t.
7. The verification code generation method based on image style conversion according to claim 1, characterized in that: the step S4 further includes an optimization algorithm:
let A denote an array storing a plurality of style images, A = {\vec{a}^{(1)}, \vec{a}^{(2)}, \dots, \vec{a}^{(n)}}, where \vec{p} represents the original content image, \vec{a}^{(k)} represents the original k-th style image, and \vec{x} represents the image to be generated; L_{content}(\vec{p}, \vec{x}) represents the loss function between the content image \vec{p} and the target image \vec{x}, and L_{style}^{(m,k)}(\vec{a}^{(k)}, \vec{x}) represents the loss function between the generated image \vec{x} and the k-th style image \vec{a}^{(k)} under the feature map of the m-th layer; the total loss with respect to the plurality of style images under the given m-th layer feature map can then be expressed as:
L_{layer}^{(m)} = \sum_{k=1}^{n} L_{style}^{(m,k)}(\vec{a}^{(k)}, \vec{x})
since the style loss is a weighted average over five layers of feature maps, the overall style loss function over the five layers of feature maps can be expressed as:
L_{layer\text{-}total} = \sum_{m=1}^{5} w_m L_{layer}^{(m)}
where w_m is the weight of the m-th layer; next, the content loss and the various style losses are combined to generate the image, and the final loss function can be expressed as:
L_{total} = \alpha L_{content}(\vec{p}, \vec{x}) + \frac{\beta}{n} L_{layer\text{-}total}
where n is the length of the array A; the factor 1/n averages the style loss L_{layer-total} between the generated image \vec{x} and all the style images, so that L_{content} and L_{layer-total} are on the same scale; \alpha and \beta are hyper-parameters that balance the two losses: if \alpha is larger, the target image is closer to the original content image \vec{p}; if \beta is larger, it is closer to a mixture of the plurality of styles.
CN202011327160.9A 2020-11-23 2020-11-23 Verification code generation method based on image style conversion Pending CN112905977A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011327160.9A CN112905977A (en) 2020-11-23 2020-11-23 Verification code generation method based on image style conversion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011327160.9A CN112905977A (en) 2020-11-23 2020-11-23 Verification code generation method based on image style conversion

Publications (1)

Publication Number Publication Date
CN112905977A 2021-06-04

Family

ID=76111408

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011327160.9A Pending CN112905977A (en) 2020-11-23 2020-11-23 Verification code generation method based on image style conversion

Country Status (1)

Country Link
CN (1) CN112905977A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109063456A (en) * 2018-08-02 2018-12-21 浙江大学 The safety detecting method and system of image-type identifying code
CN110246197A (en) * 2019-05-21 2019-09-17 北京奇艺世纪科技有限公司 Identifying code character generating method, device, electronic equipment and storage medium
US20200065471A1 (en) * 2017-11-14 2020-02-27 Tencent Technology (Shenzhen) Company Limited Security verification method and relevant device
CN111402124A (en) * 2020-03-24 2020-07-10 支付宝(杭州)信息技术有限公司 Method and device for generating texture image and synthetic image
CN111652233A (en) * 2020-06-03 2020-09-11 哈尔滨工业大学(威海) An automatic recognition method of text verification code for complex background

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200065471A1 (en) * 2017-11-14 2020-02-27 Tencent Technology (Shenzhen) Company Limited Security verification method and relevant device
CN109063456A (en) * 2018-08-02 2018-12-21 浙江大学 The safety detecting method and system of image-type identifying code
CN110246197A (en) * 2019-05-21 2019-09-17 北京奇艺世纪科技有限公司 Identifying code character generating method, device, electronic equipment and storage medium
CN111402124A (en) * 2020-03-24 2020-07-10 支付宝(杭州)信息技术有限公司 Method and device for generating texture image and synthetic image
CN111652233A (en) * 2020-06-03 2020-09-11 哈尔滨工业大学(威海) An automatic recognition method of text verification code for complex background

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
曹雄港 (Cao Xionggang): "基于图像风格转换的验证码生成技术研究" (Research on verification code generation technology based on image style conversion), Wanfang Data (《万方数据》), 13 November 2020 (2020-11-13), pages 11 - 55 *

Similar Documents

Publication Publication Date Title
CN111885035B (en) Network anomaly detection method, system, terminal and storage medium
CN111784348B (en) Account risk identification method and device
CN104899508B (en) A kind of multistage detection method for phishing site and system
WO2019201295A1 (en) File identification method and feature extraction method
CN109450845A (en) A kind of algorithm generation malice domain name detection method based on deep neural network
CN110166454A (en) A kind of composite character selection intrusion detection method based on self-adapted genetic algorithm
CN114417427A (en) A deep learning-oriented data sensitive attribute desensitization system and method
CN118133146B (en) An artificial intelligence-based method for identifying risk intrusions in the Internet of Things
CN115270996A (en) DGA domain name detection method, detection device and computer storage medium
CN113378160A (en) Graph neural network model defense method and device based on generative confrontation network
CN113947579B (en) Confrontation sample detection method for image target detection neural network
CN112260818A (en) Side channel curve enhancement method, side channel attack method and side channel attack device
CN115913643A (en) Network intrusion detection method, system and medium based on countermeasure self-encoder
CN107977461A (en) A kind of video feature extraction method and device
CN113949549A (en) Real-time traffic anomaly detection method for intrusion and attack defense
CN114866246B (en) Computer network security intrusion detection method based on big data
CN116232694A (en) Lightweight network intrusion detection method and device, electronic equipment and storage medium
CN110290101B (en) Deep trust network-based associated attack behavior identification method in smart grid environment
CN111159588B (en) A Malicious URL Detection Method Based on URL Imaging Technology
CN115510986A (en) A Adversarial Sample Generation Method Based on AdvGAN
CN112905977A (en) Verification code generation method based on image style conversion
CN111737688B (en) Attack defense system based on user portrait
CN113948067A (en) Voice countercheck sample repairing method with hearing high fidelity characteristic
CN113709152A (en) Antagonistic domain name generation model with high-resistance detection capability
CN118802258A (en) Intelligent analysis data security identification method, device, electronic equipment and storage medium

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210604)