CN108563765B - Intelligent image-text matching method and system - Google Patents
Intelligent image-text matching method and system
- Publication number
- CN108563765B (grant publication) · CN201810353977.XA / CN201810353977A (application)
- Authority
- CN
- China
- Prior art keywords
- picture
- threshold
- keywords
- keyword
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention relates to an intelligent image-text matching method and system. The method judges whether the text information matches the picture and outputs a matching result; if they match, the picture and the text are displayed, and if they do not match, the picture is not displayed and only the text information is shown. The method and system minimize unnecessary picture information when the user browses news or text, help the user save traffic, and keep the user from being lured into clicks by pictures unrelated to the text, thereby reducing the traffic wasted on clickbait content.
Description
Technical Field
The invention relates to the field of computers, in particular to an intelligent image-text matching method and system.
Background
In the prior art, with the development of science and technology, people's demand for data-processing services has grown by the day, and the mobile phone has become the main channel through which people receive information from the outside world; fewer and fewer people rely on television or newspapers for news. However, because online news is fast and real-time and many self-media outlets have joined in to capture traffic, news value is now measured by traffic, and more and more news producers attach pictures unrelated to the text, or eye-catching pictures and clickbait headlines, to lure users into clicking on the news and so gain more traffic. For a user who simply wants to browse the news, these pictures are unnecessary, cause extra traffic consumption and, being completely unrelated to the text, are to some extent annoying. Browsing news in the prior art therefore has clear drawbacks for users.
Summary of the Application
To solve the above technical problems, the present application provides an image-text matching method, which comprises the following steps:
judging whether the text information matches the picture, and outputting a matching result;
if they match, displaying the picture and the text; if they do not match, not displaying the picture and displaying only the text information.
In the image-text matching method, the step of judging whether the text information matches the picture specifically comprises:
searching the text for wording indicating that the picture and the text do not correspond (for example, a note that the image is unrelated to the article); if such wording is found, determining that the picture does not match the text information.
The step of judging whether the text information matches the picture further comprises:
searching for the picture information and outputting picture keywords;
comparing the picture keywords with the text information and judging whether the text information contains the picture keywords;
if it does, outputting a match; if not, outputting no match.
In the image-text matching method, searching for the picture information and outputting the picture keywords specifically comprises:
connecting to a network, searching for the picture, determining keywords of the picture, and outputting a plurality of keywords related to the picture as a list.
In the image-text matching method, determining the keywords of the picture specifically comprises: detecting titles or text information related to the picture, comparing a plurality of titles, extracting the words whose frequency exceeds a search threshold, arranging those words in order of frequency, comparing the arranged result with a keyword threshold, and outputting the words that exceed the keyword threshold.
In the image-text matching method, the search threshold comprises a first search threshold and a second search threshold, the first search threshold being greater than the second search threshold. A word whose frequency exceeds the first search threshold is classified as a determined keyword; a word whose frequency lies between the first and second search thresholds is classified as a selectable keyword; and a word whose frequency is below the second search threshold is classified as a fuzzy keyword. The keyword threshold comprises a first keyword threshold and a second keyword threshold. When the number of determined keywords is smaller than the first keyword threshold, the difference between the first keyword threshold and the number of determined keywords is made up from the selectable keywords, and the output picture keywords are the determined keywords plus the selected selectable keywords; when the number of determined keywords is greater than the first keyword threshold, the determined keywords are output as the picture keywords; and when the number of determined keywords is greater than the second keyword threshold, only as many keywords as the second keyword threshold are output as the picture keywords.
In the image-text matching method, if the picture and the text match, displaying them comprises: labeling the pictures, distributing and associating the pictures with the text according to their different labels, not displaying pictures that cannot be associated with the text, and, for the pictures that can be associated, locating the label-association marks in the text and displaying each picture together with the corresponding text.
In the image-text matching method, if the picture and the text do not match, the picture is not displayed and only the text information is displayed; the method then further comprises: re-typesetting the text information; determining the author of the text information and entering the author in a record table; comparing the record with an early-warning value, and if the number of records for the author exceeds the early-warning value, issuing an early-warning prompt and counting the prompts; if the number of early-warning prompts exceeds a preset value, adding the author to a blacklist and reporting the blacklisted author; and if this is the author's first record, entering the author at a fixed position in the record table.
An image-text matching system, comprising:
a searching module, used for searching picture keywords and text information;
a matching judgment module, used for judging whether the text information matches the picture and outputting a matching result;
an output module, used for outputting a result according to the matching result: if they match, the picture and the text are displayed; if they do not match, the picture is not displayed and only the text information is displayed.
The image-text matching system further comprises a threshold setting module, a networking module, a sorting module, a text typesetting module, an author information recording module and an author information early-warning module. The threshold setting module is used for setting the search threshold and the keyword threshold, the search threshold comprising a first search threshold and a second search threshold and the keyword threshold comprising a first keyword threshold and a second keyword threshold; the networking module is used for connecting to a network and searching keywords for the pictures; the sorting module is used for sorting the found picture keywords; the text typesetting module is used for re-typesetting the text information and associating pictures with text; the author information recording module is used for recording information on authors whose pictures and text do not match; and the author information early-warning module is used for issuing an early warning for authors whose pictures and text do not match and whose number of records exceeds the early-warning value.
The application can automatically identify whether a picture matches the text, save traffic to the greatest extent while the user browses news, reduce the annoyance caused by unnecessary picture information, and improve the user's experience of reading news or browsing text information.
Drawings
Fig. 1 is a schematic diagram of the image-text matching method of the present invention.
Fig. 2 is a schematic diagram of the image-text matching system of the present invention.
Detailed Description
The present application will now be described in further detail with reference to the drawings. It should be noted that the following detailed description is given for illustrative purposes only and is not to be construed as limiting the scope of the present application; those skilled in the art will be able to make numerous insubstantial modifications and adaptations to the present application based on the above disclosure.
Embodiment one:
As shown in Fig. 1, the invention provides an intelligent image-text matching method, which comprises the following steps:
judging whether the text information matches the picture, and outputting a matching result;
if they match, displaying the picture and the text; if they do not match, not displaying the picture and displaying only the text information.
In the image-text matching method, the step of judging whether the text information matches the picture specifically comprises:
searching the text for wording indicating that the picture and the text do not correspond (for example, a note that the image is unrelated to the article); if such wording is found, determining that the picture does not match the text information.
the image-text matching method specifically comprises the following steps: the step of judging whether the text information is matched with the picture direction specifically comprises the following steps:
searching picture information and outputting picture keywords;
comparing the picture key words with the text information, and judging whether the text information contains the picture key words;
if yes, outputting a matching result, and if not, outputting no match.
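For illustration only, a minimal Python sketch of the two checks above is given below; the mismatch marker phrases and the helper name are assumptions introduced for the example and are not specified by the method itself.

```python
# Example marker phrases signaling that the picture is declared unrelated
# to the text; the method does not fix any particular wording.
MISMATCH_MARKERS = ["image unrelated to the text", "picture for illustration only"]


def text_matches_picture(text: str, picture_keywords: list[str]) -> bool:
    """Return True when the text information is judged to match the picture."""
    lowered = text.lower()
    # First check: the text itself states that picture and text do not correspond.
    if any(marker in lowered for marker in MISMATCH_MARKERS):
        return False
    # Second check: the text must contain at least one of the picture keywords.
    return any(keyword.lower() in lowered for keyword in picture_keywords)
```

A caller would obtain picture_keywords from the keyword-determination step described next.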
In the image-text matching method, searching for the picture information and outputting the picture keywords specifically comprises:
connecting to a network, searching for the picture, determining keywords of the picture, and outputting a plurality of keywords related to the picture as a list.
In the image-text matching method, determining the keywords of the picture specifically comprises: detecting titles or text information related to the picture, comparing a plurality of titles, extracting the words whose frequency exceeds a search threshold, arranging those words in order of frequency, comparing the arranged result with a keyword threshold, and outputting the words that exceed the keyword threshold.
In the image-text matching method, the search threshold comprises a first search threshold and a second search threshold, the first search threshold being greater than the second search threshold. A word whose frequency exceeds the first search threshold is classified as a determined keyword; a word whose frequency lies between the first and second search thresholds is classified as a selectable keyword; and a word whose frequency is below the second search threshold is classified as a fuzzy keyword. The keyword threshold comprises a first keyword threshold and a second keyword threshold. When the number of determined keywords is smaller than the first keyword threshold, the difference between the first keyword threshold and the number of determined keywords is made up from the selectable keywords, and the output picture keywords are the determined keywords plus the selected selectable keywords; when the number of determined keywords is greater than the first keyword threshold, the determined keywords are output as the picture keywords; and when the number of determined keywords is greater than the second keyword threshold, only as many keywords as the second keyword threshold are output as the picture keywords.
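The following sketch illustrates one reading of the keyword selection just described, assuming the search-result titles are plain strings, that simple whitespace tokenization is sufficient, and that the second keyword threshold is larger than the first and acts as an upper cap on the number of output keywords; the threshold values themselves are left to the caller.

```python
from collections import Counter


def select_picture_keywords(titles, first_search, second_search,
                            first_keyword, second_keyword):
    """Choose output picture keywords from picture-search result titles.

    first_search > second_search are word-frequency thresholds;
    first_keyword < second_keyword bound how many keywords are output.
    """
    freq = Counter(word for title in titles for word in title.lower().split())
    ranked = [word for word, _ in freq.most_common()]  # most frequent first
    determined = [w for w in ranked if freq[w] > first_search]
    selectable = [w for w in ranked if second_search <= freq[w] <= first_search]
    # Words below the second search threshold are fuzzy keywords and are not output.

    if len(determined) > second_keyword:
        return determined[:second_keyword]         # cap at the second keyword threshold
    if len(determined) >= first_keyword:
        return determined                          # enough determined keywords already
    shortfall = first_keyword - len(determined)    # top up from the selectable keywords
    return determined + selectable[:shortfall]
```

The ordering of the three branches reflects the reading that the second keyword threshold caps the output; other readings of the overlap between the two conditions are possible.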
In the image-text matching method, if the picture and the text match, displaying them comprises: labeling the pictures, distributing and associating the pictures with the text according to their different labels, not displaying pictures that cannot be associated with the text, and, for the pictures that can be associated, locating the label-association marks in the text and displaying each picture together with the corresponding text.
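One possible reading of the label association is sketched below, under the assumption that each picture carries a single keyword label and is placed next to the first paragraph that mentions that label; pictures whose label appears nowhere in the text are simply not displayed. The paragraph-level placement is an assumption made for the example.

```python
def associate_pictures(paragraphs, labelled_pictures):
    """Map each paragraph index to the pictures whose label it mentions.

    labelled_pictures: iterable of (picture_id, label) pairs.
    Pictures whose label matches no paragraph are left out of the layout.
    """
    layout = {index: [] for index in range(len(paragraphs))}
    for picture_id, label in labelled_pictures:
        for index, paragraph in enumerate(paragraphs):
            if label.lower() in paragraph.lower():
                layout[index].append(picture_id)  # display picture with this paragraph
                break                             # associate with the first mention only
    return layout
```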
In the image-text matching method, if the picture and the text do not match, the picture is not displayed and only the text information is displayed; the method then further comprises: re-typesetting the text information; determining the author of the text information and entering the author in a record table; comparing the record with an early-warning value, and if the number of records for the author exceeds the early-warning value, issuing an early-warning prompt and counting the prompts; if the number of early-warning prompts exceeds a preset value, adding the author to a blacklist and reporting the blacklisted author; and if this is the author's first record, entering the author at a fixed position in the record table.
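A minimal sketch of the author bookkeeping just described, assuming the record table is an in-memory dictionary and that the early-warning value and the preset value are supplied as parameters; how authors are actually reported is left open here.

```python
class AuthorRecorder:
    """Track authors whose pictures were judged not to match their text."""

    def __init__(self, warning_value: int, preset_value: int):
        self.warning_value = warning_value  # record count that triggers a warning
        self.preset_value = preset_value    # warning count that triggers blacklisting
        self.records = {}                   # record table: author -> mismatch count
        self.warnings = {}                  # author -> number of warnings issued
        self.blacklist = set()

    def record_mismatch(self, author: str) -> str:
        self.records[author] = self.records.get(author, 0) + 1
        if self.records[author] <= self.warning_value:
            return "recorded"               # first or early record: just log the author
        self.warnings[author] = self.warnings.get(author, 0) + 1
        if self.warnings[author] > self.preset_value:
            self.blacklist.add(author)      # blacklist and report the author
            return "blacklisted"
        return "warned"
```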
Embodiment two:
The invention provides a method for monitoring the traffic consumed when pictures and text do not match, which comprises the following steps:
judging whether the text information matches the picture, and outputting a matching result;
if they match, displaying the picture and the text; if they do not match, monitoring the traffic consumption value and judging whether it exceeds a traffic threshold; if the traffic consumption value exceeds the traffic threshold, not displaying the picture and displaying only the text information.
The specific method for monitoring the traffic consumption value comprises the following steps:
Step 1: monitoring, within a monitoring period, the traffic consumed by mismatched pictures and text, to form a traffic-loss waveform; one day, half a day, or another fixed time period is selected as the monitoring period;
Step 2: analyzing the traffic-loss waveform to obtain the amplitudes xi of the waves in the monitoring period, where i is a natural number from 1 to n and n is the number of waves in one monitoring period;
Step 3: calculating the mean value Yn of the amplitudes xi recursively by the following formula (1):
Yn = (1 - k)·Yn-1 + k·xn   (1)
in formula (1), xn is the amplitude of the wave measured the n-th time, Yn is the mean value of the amplitudes xi at the n-th recursion, and k is a calculation constant;
Step 4: performing the recursive calculation again through formula (2) to obtain the mean-square value Zn of the amplitudes xi:
Zn = (1 - k)·Zn-1 + k·xn²   (2)
where Zn is the mean-square value of the amplitudes xi at the n-th recursion;
Step 5: calculating a standard value standard(xi) for xi by formula (3):
standard(xi) = Yn² - Zn   (3)
Step 6: performing a binary judgment on xi through formula (4):
S = xi - Yn·standard(xi)   (4)
Step 7: when S is greater than zero, the traffic consumed by mismatched pictures and text is considered excessive, that is, the traffic exceeds the threshold.
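For illustration, the sketch below applies formulas (1)-(4) literally to a list of measured amplitudes; the calculation constant k, the starting values Y0 = Z0 = 0, and the choice of applying the binary judgment to the latest amplitude are assumptions made for the example.

```python
def traffic_exceeds_threshold(amplitudes, k=0.1):
    """Binary judgment of steps 3-7 on the amplitudes x1..xn of one monitoring period."""
    if not amplitudes:
        return False
    y = z = 0.0                          # assumed starting values Y0 = Z0 = 0
    for x in amplitudes:
        y = (1 - k) * y + k * x          # (1) recursive mean Yn
        z = (1 - k) * z + k * x * x      # (2) recursive mean-square Zn
    standard = y * y - z                 # (3) standard value, taken verbatim from formula (3)
    s = amplitudes[-1] - y * standard    # (4) binary judgment on the latest xi
    return s > 0                         # S > 0: traffic consumption is excessive
```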
In the image-text matching method, the step of judging whether the text information matches the picture specifically comprises:
searching the text for wording indicating that the picture and the text do not correspond; if such wording is found, determining that the picture does not match the text information.
The step of judging whether the text information matches the picture further comprises:
searching for the picture information and outputting picture keywords;
comparing the picture keywords with the text information and judging whether the text information contains the picture keywords;
if it does, outputting a match; if not, outputting no match.
In the image-text matching method, searching for the picture information and outputting the picture keywords specifically comprises:
connecting to a network, searching for the picture, determining keywords of the picture, and outputting a plurality of keywords related to the picture as a list.
In the image-text matching method, determining the keywords of the picture specifically comprises: detecting titles or text information related to the picture, comparing a plurality of titles, extracting the words whose frequency exceeds a search threshold, arranging those words in order of frequency, comparing the arranged result with a keyword threshold, and outputting the words that exceed the keyword threshold.
In the image-text matching method, the search threshold comprises a first search threshold and a second search threshold, the first search threshold being greater than the second search threshold. A word whose frequency exceeds the first search threshold is classified as a determined keyword; a word whose frequency lies between the first and second search thresholds is classified as a selectable keyword; and a word whose frequency is below the second search threshold is classified as a fuzzy keyword. The keyword threshold comprises a first keyword threshold and a second keyword threshold. When the number of determined keywords is smaller than the first keyword threshold, the difference between the first keyword threshold and the number of determined keywords is made up from the selectable keywords, and the output picture keywords are the determined keywords plus the selected selectable keywords; when the number of determined keywords is greater than the first keyword threshold, the determined keywords are output as the picture keywords; and when the number of determined keywords is greater than the second keyword threshold, only as many keywords as the second keyword threshold are output as the picture keywords.
In the image-text matching method, if the picture and the text match, displaying them comprises: labeling the pictures, distributing and associating the pictures with the text according to their different labels, not displaying pictures that cannot be associated with the text, and, for the pictures that can be associated, locating the label-association marks in the text and displaying each picture together with the corresponding text.
In the image-text matching method, if the picture and the text do not match, the picture is not displayed and only the text information is displayed; the method then further comprises: re-typesetting the text information; determining the author of the text information and entering the author in a record table; comparing the record with an early-warning value, and if the number of records for the author exceeds the early-warning value, issuing an early-warning prompt and counting the prompts; if the number of early-warning prompts exceeds a preset value, adding the author to a blacklist and reporting the blacklisted author; and if this is the author's first record, entering the author at a fixed position in the record table.
Embodiment three:
As shown in Fig. 2, the image-text matching system of the present invention comprises:
a searching module, used for searching picture keywords and text information;
a matching judgment module, used for judging whether the text information matches the picture and outputting a matching result;
an output module, used for outputting a result according to the matching result: if they match, the picture and the text are displayed; if they do not match, the picture is not displayed and only the text information is displayed.
The image-text matching system further comprises a threshold setting module, a networking module, a sorting module, a text typesetting module, an author information recording module and an author information early-warning module. The threshold setting module is used for setting the search threshold and the keyword threshold, the search threshold comprising a first search threshold and a second search threshold and the keyword threshold comprising a first keyword threshold and a second keyword threshold; the networking module is used for connecting to a network and searching keywords for the pictures; the sorting module is used for sorting the found picture keywords; the text typesetting module is used for re-typesetting the text information and associating pictures with text; the author information recording module is used for recording information on authors whose pictures and text do not match; and the author information early-warning module is used for issuing an early warning for authors whose pictures and text do not match and whose number of records exceeds the early-warning value.
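For orientation only, the core module split of Fig. 2 can be pictured as a thin wrapper that wires a searching module, a matching judgment module and an output module together; every class and method name below is illustrative and not taken from the patent.

```python
class ImageTextMatchingSystem:
    """Illustrative wiring of the core modules shown in Fig. 2."""

    def __init__(self, search_module, match_module, output_module):
        self.search_module = search_module  # finds picture keywords and text information
        self.match_module = match_module    # judges whether the text matches the picture
        self.output_module = output_module  # renders picture plus text, or text only

    def render(self, picture, text):
        keywords = self.search_module.picture_keywords(picture)
        matched = self.match_module.judge(text, keywords)
        return self.output_module.render(picture, text, matched)
```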
The application can automatically identify whether a picture matches the text, save traffic to the greatest extent while the user browses news, reduce the annoyance caused by unnecessary picture information, and improve the user's experience of reading news or browsing text information.
Claims (5)
1. An intelligent image-text matching method is characterized by comprising the following steps:
judging whether the text information matches the picture, and outputting a matching result;
if they match, displaying the picture and the text; if they do not match, not displaying the picture and displaying only the text information;
the step of judging whether the text information matches the picture specifically comprises: searching the text for wording indicating that the picture and the text do not correspond, and if such wording is found, determining that the picture does not match the text information;
the step of judging whether the text information matches the picture further comprises: searching for the picture information and outputting picture keywords;
comparing the picture keywords with the text information and judging whether the text information contains the picture keywords;
if it does, outputting a match; if not, outputting no match;
the searching for the picture information and outputting the picture keywords specifically comprises: connecting to a network, searching for the picture, determining keywords of the picture, and outputting a plurality of keywords related to the picture as a list;
the determining of the keywords of the picture specifically comprises: detecting titles or text information related to the picture, comparing a plurality of titles, extracting the words whose frequency exceeds a search threshold, arranging those words in order of frequency, comparing the arranged result with a keyword threshold, and outputting the words that exceed the keyword threshold.
2. The image-text matching method according to claim 1, wherein the search threshold comprises a first search threshold and a second search threshold, the first search threshold being greater than the second search threshold; a word whose frequency exceeds the first search threshold is classified as a determined keyword, a word whose frequency lies between the first and second search thresholds is classified as a selectable keyword, and a word whose frequency is below the second search threshold is classified as a fuzzy keyword; the keyword threshold comprises a first keyword threshold and a second keyword threshold; when the number of determined keywords is smaller than the first keyword threshold, the difference between the first keyword threshold and the number of determined keywords is made up from the selectable keywords and the output picture keywords are the determined keywords plus the selected selectable keywords; when the number of determined keywords is greater than the first keyword threshold, the determined keywords are output as the picture keywords; and when the number of determined keywords is greater than the second keyword threshold, only as many keywords as the second keyword threshold are output as the picture keywords.
3. The image-text matching method according to claim 1, wherein displaying the picture and the text if they match comprises: labeling the pictures, respectively associating the pictures with the text according to their different labels, not displaying pictures that cannot be associated with the text, and, for the pictures that can be associated, locating the label-association marks in the text and displaying each picture together with the corresponding text.
4. The image-text matching method according to claim 1, wherein if the picture and the text do not match, the picture is not displayed and only the text information is displayed, after which the method further comprises: re-typesetting the text information; determining the author of the text information and entering the author in a record table; comparing the record with an early-warning value, and if the number of records for the author exceeds the early-warning value, issuing an early-warning prompt and counting the prompts; if the number of early-warning prompts exceeds a preset value, adding the author to a blacklist and reporting the blacklisted author; and if this is the author's first record, entering the author at a fixed position in the record table.
5. An intelligent image-text matching system, characterized by comprising:
a searching module, used for searching picture keywords and text information;
a matching judgment module, used for judging whether the text information matches the picture and outputting a matching result;
an output module, used for outputting a result according to the matching result: if they match, the picture and the text are displayed; if they do not match, the picture is not displayed and only the text information is displayed;
wherein the system further comprises a threshold setting module, a networking module, a sorting module, a text typesetting module, an author information recording module and an author information early-warning module; the threshold setting module is used for setting a search threshold and a keyword threshold, the search threshold comprising a first search threshold and a second search threshold and the keyword threshold comprising a first keyword threshold and a second keyword threshold; the networking module is used for connecting to a network and searching keywords for the pictures; the sorting module is used for sorting the found picture keywords; the text typesetting module is used for re-typesetting the text information and associating pictures with text; the author information recording module is used for recording information on authors whose pictures and text do not match; and the author information early-warning module is used for issuing an early warning for authors whose pictures and text do not match and whose number of records exceeds the early-warning value.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810353977.XA CN108563765B (en) | 2018-04-19 | 2018-04-19 | Intelligent image-text matching method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810353977.XA CN108563765B (en) | 2018-04-19 | 2018-04-19 | Intelligent image-text matching method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108563765A CN108563765A (en) | 2018-09-21 |
CN108563765B true CN108563765B (en) | 2021-11-16 |
Family
ID=63535925
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810353977.XA Active CN108563765B (en) | 2018-04-19 | 2018-04-19 | Intelligent image-text matching method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108563765B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112445908A (en) * | 2019-08-29 | 2021-03-05 | 北京京东尚科信息技术有限公司 | Commodity comment information display method and device, electronic equipment and storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100530242C (en) * | 2007-09-14 | 2009-08-19 | 北大方正集团有限公司 | Picture and words typesetting method |
CN104360797A (en) * | 2014-10-16 | 2015-02-18 | 广州三星通信技术研究有限公司 | Content display method and system for electronic equipment |
CN106412213A (en) * | 2015-07-27 | 2017-02-15 | 中兴通讯股份有限公司 | Contact person information processing method and device and mobile terminal |
CN105243331A (en) * | 2015-10-23 | 2016-01-13 | 中国联合网络通信集团有限公司 | Encryption device and encryption method, and decryption device and decryption method |
2018
- 2018-04-19: CN application CN201810353977.XA granted as patent CN108563765B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN108563765A (en) | 2018-09-21 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
TA01 | Transfer of patent application right | |
Effective date of registration: 20211021 Address after: 518000 14b, 14th floor, Maoye Times Square, No. 288, Hyde 2nd Road, Haizhu community, Yuehai street, Nanshan District, Shenzhen, Guangdong Applicant after: Aiyouya information technology (Shenzhen) Co.,Ltd. Address before: 6288, 6th floor, building 2, No. 2, No. 117, Zhangcha 1st Road, Chancheng District, Foshan City, Guangdong Province Applicant before: FOSHAN LONGSHENG GUANGQI TECHNOLOGY Co.,Ltd. |
GR01 | Patent grant | |