Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example One
Fig. 1 is a flowchart of a translation method according to an embodiment of the present invention. The present embodiment is applicable to the case of translating text information in a picture. The method may be executed by a translation apparatus, which may be implemented in software and/or hardware and may generally be integrated in a terminal. The method includes the following steps:
S110, acquiring the picture containing the text information.
In an embodiment of the present invention, optionally, acquiring the picture containing the text information may include: acquiring the picture containing the text information through the photographing, storage, screenshot, and download functions of the intelligent terminal device. The picture containing the text information may be a picture including only text information, or a picture including both text information and non-text information, the latter including but not limited to a person, a scene, and the like. The text information in the picture may be words or sentences, in languages including but not limited to English, Korean, Japanese, German, and Chinese.
S120, generating a text translation option when the picture is displayed; wherein the text translation options include a word translation option and a sentence translation option.
In an embodiment of the present invention, optionally, generating a text translation option when the picture is displayed may include: generating a text translation option when the picture is displayed, and displaying the text translation option at the top of the screen or at another position where no text information is displayed. Text translation options include, but are not limited to, a word translation option and a sentence translation option. After the user selects the word translation option, a translation service for one or more words can be provided for the user; after the user selects the sentence translation option, a translation service for one or more sentences can be provided for the user.
S130, if the triggering operation on the word translation option is detected, translating the target word corresponding to the target position of the screen in the picture display area triggered by the user.
In an embodiment of the present invention, optionally, the triggering operation on the word translation option may include: completing the triggering operation on the word translation option by clicking or by a voice input instruction. To trigger the word translation option by clicking, the user only needs to manually click the 'word translation' option; to trigger it by a voice input instruction, the user sends a voice instruction such as 'word translation'. The target position may be obtained in a click manner, and there may be one or more target positions, so that there may be one or more corresponding target words. In addition, the target word may be translated by using a local word bank and/or online translation.
In an embodiment of the present invention, optionally, the translating the target word corresponding to the target position of the screen in the picture display area triggered by the user may include: after the triggering operation of the word translation option is detected, dividing a screen into word areas according to text information in the picture; acquiring a target position of a screen in a picture display area triggered by a user, and determining a target word area where the target position is located; and identifying a target word in the target word area, and translating the target word.
Specifically, the operation of dividing the screen into word areas is to divide the screen into a plurality of areas according to the area covered by each word. When the target position of the screen triggered by the user is in the coverage area of a word, the area of the word is the target word area, the word is the target word, and the target word is translated.
Alternatively, all words in the text information are automatically segmented and framed on the screen. The user can quickly locate a target word among the automatically framed word areas, and trigger one or more target positions of the screen in the picture display area by clicking one or more framed words, so that the target word areas where the one or more target words are located are determined according to the one or more target positions, and the one or more target words are thereby determined.
When a plurality of target words are translated, the translation results may be displayed in order of the determination of the target word regions.
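As an illustrative sketch only (the word regions, coordinates, and data structures below are hypothetical, not part of the claimed implementation), the hit test described above — dividing the picture display area into per-word bounding boxes and mapping each tapped target position to the word whose area covers it — might look as follows:

```python
from dataclasses import dataclass

@dataclass
class WordRegion:
    """Bounding box of one recognized word on screen (pixel coordinates)."""
    word: str
    left: int
    top: int
    right: int
    bottom: int

    def contains(self, x: int, y: int) -> bool:
        return self.left <= x <= self.right and self.top <= y <= self.bottom

def find_target_words(regions, taps):
    """Map each tapped target position to the word region that covers it."""
    targets = []
    for x, y in taps:
        for region in regions:
            if region.contains(x, y):
                targets.append(region.word)
                break
    return targets

# Hypothetical layout: two words laid out side by side on one line.
regions = [
    WordRegion("hello", 0, 0, 100, 40),
    WordRegion("world", 110, 0, 210, 40),
]
print(find_target_words(regions, [(50, 20), (150, 20)]))  # ['hello', 'world']
```

Because taps are processed in order, the returned list naturally preserves the order in which the target word regions were determined, matching the display rule stated above.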
S140, if the trigger operation on the sentence translation option is detected, translating the target sentence corresponding to the mark drawn by the user on the display screen.
In an embodiment of the present invention, optionally, the triggering operation on the sentence translation option may include: completing the triggering operation on the sentence translation option by clicking or by a voice input instruction. To trigger the sentence translation option by clicking, the user only needs to manually click the 'sentence translation' option; to trigger it by a voice input instruction, the user sends a voice instruction such as 'sentence translation'. The mark drawn by the user on the display screen includes, but is not limited to, a straight line, a curve, a plurality of line segments, a rectangle, a circle, an irregular polygon, and the like. The target sentence may be one or more sentences, where multiple sentences may be selected continuously or discontinuously. In addition, the target sentence may be translated by using a local word bank and/or online translation.
In an embodiment of the present invention, optionally, translating the target sentence corresponding to the mark drawn by the user on the display screen includes: dividing the screen into sentence regions according to the text information in the picture; acquiring a sliding track of the user on the screen, and determining the target sentence region where the sliding track is located; and identifying the target sentence in the target sentence region, and translating the target sentence.
Specifically, the operation of dividing the sentence region of the screen is to divide the screen into a plurality of regions according to the region covered by each sentence, wherein the region in which each sentence is located is a sentence region. When the area where the sliding track on the screen is located is a sentence area, the sentence area is a target sentence area, and the sentence in the target sentence area is a target sentence. For example, there are two sentences, a and B, respectively, in the text information displayed on the screen, and the screen is divided into two regions, one is the region where sentence a is located and one is the region where sentence B is located. And if the sliding track of the user on the screen is in the area of the sentence A, translating the sentence A.
Alternatively, all sentences in the text information are automatically segmented on the screen and marked with underlines. The user can quickly locate a target sentence among the automatically segmented sentence regions, and trigger one or more target positions of the screen in the picture display area by clicking one or more underlines, so that the target sentence regions where the one or more target sentences are located are determined according to the one or more target positions, the one or more target sentences are thereby determined and translated, and the operation efficiency is improved.
When a plurality of target sentences are translated, the translation results may be sequentially displayed in a certain order of the target sentence regions.
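The sentence-region matching in the steps above can be sketched as follows. This is a simplified illustration with hypothetical screen coordinates: the sliding track is sampled as screen points, and the sentence region containing the most track points is chosen as the target sentence region:

```python
def match_track_to_sentence(sentence_regions, track):
    """Pick the sentence region containing the most points of the slide track.

    sentence_regions: dict mapping sentence text -> (left, top, right, bottom)
    track: list of (x, y) screen points sampled along the user's slide.
    """
    best_sentence, best_hits = None, 0
    for sentence, (l, t, r, b) in sentence_regions.items():
        hits = sum(1 for x, y in track if l <= x <= r and t <= y <= b)
        if hits > best_hits:
            best_sentence, best_hits = sentence, hits
    return best_sentence

# Hypothetical layout: two sentences, A and B, each occupying one region.
regions = {
    "Sentence A.": (0, 0, 300, 40),
    "Sentence B.": (0, 50, 300, 90),
}
track = [(10, 20), (100, 22), (200, 25)]  # slide drawn across the first line
print(match_track_to_sentence(regions, track))  # Sentence A.
```

Voting by point count (rather than requiring the entire track to stay inside one region) tolerates a slide that briefly strays outside the sentence's box.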
According to the embodiment of the invention, the picture containing the text information is obtained, the corresponding target word or the corresponding target sentence is translated according to the generated word translation option and sentence translation option, and the word or the sentence needing to be translated does not need to be input, so that the translation efficiency can be improved, and the translation operation can be simplified.
On the basis of the above technical solution, optionally, if the text information includes printed characters, a word or a sentence is recognized by using an Optical Character Recognition (OCR) algorithm; and if the text information includes handwritten characters, a word or a sentence is recognized by using an OCR-based dual recognition engine algorithm. The OCR-based dual recognition engine algorithm uses two recognition engines, the OCR recognition algorithm and a corner feature database, to recognize characters together.
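A minimal sketch of the recognition dispatch just described, with the actual engines stubbed out as placeholders (a real system would wrap genuine OCR, stroke-outline, and corner-feature engines; the `region` dictionary shape here is purely hypothetical):

```python
# Placeholder engines; each `region` dict stands in for an image region.
def ocr_recognize(region):
    # Stub: a real OCR engine would analyze pixel data here.
    return region["expected"]

def stroke_recognize(region):
    # Stub stroke-outline engine: returns (result, confident);
    # ambiguous strokes lower confidence, deferring to the second engine.
    return region["expected"], not region.get("ambiguous", False)

def corner_feature_recognize(region):
    # Stub corner-feature-database engine used as the fallback.
    return region["expected"]

def recognize(text_kind, region):
    """Dispatch per the scheme above: printed text -> single OCR pass;
    handwritten text -> stroke engine first, corner-feature engine second."""
    if text_kind == "print":
        return ocr_recognize(region)
    result, confident = stroke_recognize(region)
    return result if confident else corner_feature_recognize(region)

print(recognize("print", {"expected": "hello"}))  # hello
print(recognize("handwritten", {"expected": "X", "ambiguous": True}))  # X
```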
OCR is a technique that determines the shapes of characters printed on paper by detecting patterns of dark and light, and then translates those shapes into computer text by a character recognition method; the specific implementation of OCR is a well-known technique and will not be described in detail herein. It should be noted that the printed text includes, but is not limited to, artistic characters and conventional fonts in various languages, such as the SimSun (Song) font, the Times New Roman font, and the like.
Specifically, the dual recognition engine recognition algorithm based on OCR is described by taking the recognition of Chinese characters as an example.
(1) First, an OCR recognition algorithm is adopted to describe the outline strokes of the Chinese character to be recognized. The strokes are described by outlines: 'short edges' that are irrelevant to the overall shape are generalized and absorbed into adjacent 'long edges', a 'curve' is approximated as two line segments, and local concavities and convexities are merged according to the surrounding context. Thus, a complex character can be described with as few strokes as possible. This recognition method has a high recognition speed, and the recognition result can be obtained directly from the strokes without searching a database. If the strokes are sufficiently distinct, handwriting written by different people can be easily recognized; but if the strokes are ambiguous, they are difficult to describe, and the character can be further recognized by the second recognition engine.
(2) A corner feature database is used as the second recognition engine. Specifically: assuming that each Chinese character is represented by a 32 × 32 lattice, the lattice is scanned from four directions, i.e., top, bottom, left, and right, to obtain four-side contour features with four values P1, P2, P3, and P4. To further distinguish Chinese characters that share the same four-side feature values, such as 'nation' (国), 'prisoner' (囚), 'four' (四), and 'cause' (因), the character lattice can be cut again (inward by 1/4 on each side), and four-side features Q1, Q2, Q3, and Q4 are then taken from the cut lattice, so that the feature description of one character is P1, P2, P3, P4, Q1, Q2, Q3, Q4, and these eight values are stored in a database. A database of the outer-side and inner-side features of all Chinese characters is obtained through a large amount of learning and memorization. When a Chinese character is to be identified, the most similar Chinese character is searched for in the database to obtain the recognition result.
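The corner-feature scheme above can be illustrated in miniature. This sketch uses small 8 × 8 grids instead of 32 × 32 lattices and two made-up glyphs, but follows the same idea: four-side contour profiles (P1–P4), the same profiles taken on the inner quarter (Q1–Q4), and a nearest match against a feature database:

```python
def edge_profile(grid, direction):
    """Distance from the given edge to the first inked cell, per scan line."""
    n = len(grid)
    profile = []
    for i in range(n):
        if direction in ("left", "right"):
            line = grid[i] if direction == "left" else grid[i][::-1]
        else:
            col = [grid[r][i] for r in range(n)]
            line = col if direction == "top" else col[::-1]
        depth = next((d for d, cell in enumerate(line) if cell), n)
        profile.append(depth)
    return tuple(profile)

def four_side_features(grid):
    """P1..P4: contour profiles scanned from top, bottom, left, right."""
    return tuple(edge_profile(grid, d) for d in ("top", "bottom", "left", "right"))

def inner_features(grid):
    """Q1..Q4: the same four profiles taken on the central part after
    cutting the grid inward by 1/4 on each side."""
    n = len(grid)
    q = n // 4
    inner = [row[q:n - q] for row in grid[q:n - q]]
    return tuple(edge_profile(inner, d) for d in ("top", "bottom", "left", "right"))

def classify(grid, database):
    """Nearest match: pick the stored character whose (P, Q) features
    differ from the input in the fewest positions."""
    feats = four_side_features(grid) + inner_features(grid)
    def distance(stored):
        return sum(a != b for fa, fb in zip(feats, stored) for a, b in zip(fa, fb))
    return min(database, key=lambda ch: distance(database[ch]))

# Two toy glyphs: a hollow box (like 口) and a cross (like 十).
square = [[1 if r in (0, 7) or c in (0, 7) else 0 for c in range(8)] for r in range(8)]
cross = [[1 if r == 3 or c == 3 else 0 for c in range(8)] for r in range(8)]
db = {
    "box": four_side_features(square) + inner_features(square),
    "cross": four_side_features(cross) + inner_features(cross),
}
print(classify(square, db))  # box
```

Note that the two glyphs here are distinguishable from their outer profiles alone; the inner Q features matter precisely for characters whose outer profiles coincide, as in the 国/囚/四/因 example above.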
It should be noted that, for characters of different languages, the OCR character recognition algorithm can perform recognition directly, while the second recognition engine in the OCR-based dual recognition engine algorithm can adopt a contour feature extraction method tailored to the characteristics of the characters of each language; the details are omitted here.
On the basis of the above embodiment, optionally, the text translation option further includes a paragraph translation option; the method further comprises the following steps: if the triggering operation of the paragraph translation option is detected, acquiring a closed graph drawn on a screen by a user; and identifying a target text paragraph in the closed graph, and translating the target text paragraph.
The triggering operation on the paragraph translation option may include: completing the triggering operation on the paragraph translation option by clicking or by a voice input instruction. To trigger the paragraph translation option by clicking, the user only needs to manually click the paragraph translation option; to trigger it by a voice input instruction, the user sends a voice instruction such as 'paragraph translation'. It should be noted that the closed figure includes, but is not limited to, a rectangle, a circle, an oval, or an irregular polygon, and there may be one or more closed figures. The target text paragraph may be recognized by an OCR recognition method.
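As an illustrative sketch (the paragraph boxes and polygon coordinates below are hypothetical), selecting the target text paragraphs inside the user's closed figure can be done with a standard ray-casting point-in-polygon test on each paragraph's center:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the closed polygon the user drew?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def paragraphs_in_figure(paragraph_boxes, polygon):
    """Select paragraphs whose bounding-box center lies inside the figure."""
    selected = []
    for text, (l, t, r, b) in paragraph_boxes:
        if point_in_polygon((l + r) / 2, (t + b) / 2, polygon):
            selected.append(text)
    return selected

polygon = [(0, 0), (200, 0), (200, 100), (0, 100)]  # user drew a rectangle
boxes = [("First paragraph", (10, 10, 190, 40)),
         ("Second paragraph", (10, 150, 190, 190))]
print(paragraphs_in_figure(boxes, polygon))  # ['First paragraph']
```

The polygon vertex list works for rectangles, irregular polygons, and polyline approximations of circles or ovals alike, matching the variety of closed figures listed above.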
On the basis of the above technical solution, optionally, the method further includes: judging whether each line of characters in the text information is on the same horizontal line, or whether each column of characters is on the same vertical line; and if not, performing tilt correction on the picture so that each line of characters in the text information is on the same horizontal line, or each column of characters is on the same vertical line.
Before translation, the text information in the picture needs to undergo tilt correction to improve the recognition efficiency of the text. According to common typesetting habits, most texts are arranged in the horizontal direction, while in some cases texts are arranged in the vertical direction, such as newspapers, hand-copied newspapers, and the like. For text information typeset in the horizontal direction, it is judged whether each line of characters is on the same horizontal line; for text information typeset in the vertical direction, it is judged whether each column of characters is on the same vertical line.
Tilt correction algorithms include, but are not limited to, the Hough transform method, the two-point method, and the like. In the embodiment of the present invention, the Hough transform method is used to illustrate the tilt correction of the picture.
The Hough transformation method comprises the following steps:
(1) generating an image pyramid;
(2) taking the highest layer image;
(3) extracting the horizontal edges of the image, and performing the Hough transform using the formula x·cos θ + y·sin θ = ρ;
(4) thresholding the Hough accumulation result using the formula T = γ·max A(ρ, θ), setting A(ρ, θ) = 0 if A(ρ, θ) ≤ T, and searching for and calculating the inclination angle;
(5) if the obtained inclination angle meets the precision requirement, turning to the step (7);
(6) refining the angle: taking the next lower layer of the image pyramid, and turning to step (3);
(7) rotating the image;
(8) end.
It should be noted that the specific implementation manner of the Hough transform method is a known technology, and meanwhile, an improvement may be made on the basis of the Hough transform method provided by the embodiment of the present invention to obtain a tilt correction method with higher accuracy.
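The angle search at the heart of steps (3)–(4) can be sketched as a direct Hough vote: each edge point votes for ρ = x·cos θ + y·sin θ over a range of candidate angles, and the angle whose accumulator bin receives the strongest peak approximates the tilt. This simplified, single-scale illustration omits the image pyramid and the threshold T = γ·max A(ρ, θ); the synthetic baseline points are hypothetical:

```python
import math

def hough_tilt_angle(points, angle_range_deg=15, angle_step_deg=0.5):
    """Estimate the dominant near-horizontal line angle of edge points.
    Each point votes for rho = x*cos(theta) + y*sin(theta); the candidate
    angle with the strongest single accumulator bin wins."""
    best_theta, best_votes = 0.0, -1
    steps = int(2 * angle_range_deg / angle_step_deg) + 1
    for k in range(steps):
        theta_deg = -angle_range_deg + k * angle_step_deg
        theta = math.radians(90 + theta_deg)  # normal of a near-horizontal line
        votes = {}
        for x, y in points:
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            votes[rho] = votes.get(rho, 0) + 1
        peak = max(votes.values())
        if peak > best_votes:
            best_votes, best_theta = peak, theta_deg
    return best_theta

# Synthetic text baseline: edge points on a line tilted about 5 degrees.
tilt = math.radians(5)
points = [(x, round(x * math.tan(tilt))) for x in range(0, 200, 2)]
angle = hough_tilt_angle(points)
```

Rotating the image by the negative of the estimated angle then aligns each line of characters onto the same horizontal line, as required by the correction step above.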
In this way, tilt correction of the text information can improve the recognition efficiency of the text information and thereby improve the translation accuracy.
Example Two
Fig. 2 is a flowchart of a translation method according to a second embodiment of the present invention, where the method includes the following steps:
S210, obtaining the picture containing the text information.
S220, generating a text translation option when the picture is displayed; wherein the text translation options include a word translation option, a sentence translation option, and a paragraph translation option.
S230, opening a selection pop-up box for the translation language when a preset action of the user on the screen is detected; and closing the selection pop-up box when an instruction of selecting the target translation language is received.
In an embodiment of the invention, if a preset action of the user on the screen is detected, a selection pop-up box for the translation language is opened, and the selection pop-up box presents the candidate target translation languages in graphical form. The preset action may be a sliding action or another action. The target translation languages include, but are not limited to, English, Chinese, Japanese, Korean, and other languages. When an instruction of the user selecting the target translation language is received, the selection pop-up box is closed so that the text information in the picture is displayed normally.
S240, if the triggering operation of the word translation option is detected, translating the target word corresponding to the target position of the screen in the picture display area triggered by the user into the corresponding word in the target translation language.
In an embodiment of the invention, the target translation language selected by the user has been determined before the triggering operation on the word translation option. Therefore, once the triggering operation on the word translation option is detected, the one or more target words corresponding to the target positions of the screen in the picture display area triggered by the user are translated into the corresponding words in the target translation language. For example, if the determined target translation language selected by the user is English, when the triggering operation on the word translation option is detected, the identified target word is translated into English.
S250, if the trigger operation on the sentence translation option is detected, translating the target sentence corresponding to the mark drawn by the user on the display screen into the corresponding sentence in the target translation language.
In one embodiment of the invention, the target translation language selected by the user has been determined prior to the triggering operation of the sentence translation option. Accordingly, upon detecting a triggering operation for a sentence translation option, the corresponding target sentence or sentences identified on the display screen by the user are translated into corresponding sentences in the target translation language.
On the basis of the above embodiment, the method may further include: and if the triggering operation on the paragraph translation option is detected, translating the target text paragraph corresponding to the closed graph drawn on the screen by the user into the corresponding paragraph in the target translation language.
In one embodiment of the invention, the target translation language selected by the user has been determined prior to the triggering operation of the paragraph translation option. Thus, upon detecting a trigger action on the paragraph translation option, one or more target text paragraphs corresponding to closed graphics depicted on the screen by the user are translated into corresponding paragraphs in the target translation language.
It should be noted that the embodiment of the present invention can not only complete the translation of characters of different languages but also provide the function of a multilingual dictionary. For example, if the text information in the picture includes Chinese idioms, lines of ancient poetry, classical Chinese expressions, and the like, and an instruction selecting Chinese as the target translation language is received, the target word, target sentence, or target text paragraph is annotated. Similarly, the embodiment of the invention is also applicable to the dictionary functions of other languages.
According to the technical scheme of this embodiment, after the preset action of the user on the screen is detected, a selection pop-up box for the translation language is opened; when an instruction of selecting the target translation language is received, the selection pop-up box is closed, so that the target word, target sentence, or target paragraph is translated into the corresponding word, sentence, or paragraph in the target translation language, and translation services in multiple optional languages can be provided.
Example Three
Fig. 3 is a flowchart of a translation method provided in the third embodiment of the present invention, where the method includes the following steps:
S310, obtaining the picture containing the text information.
S320, generating a text translation option and a translation mode option when the picture is displayed; wherein the text translation options include a word translation option, a sentence translation option, and a paragraph translation option.
In the embodiment of the invention, the translation modes include but are not limited to a learning mode, an academic mode, an entertainment mode and a news information mode, a user can select different translation modes, and the terminal provides corresponding value-added services according to the translation mode selected by the user after acquiring the translation result.
S330, if the triggering operation of the word translation option is detected, translating the target word corresponding to the target position of the screen in the picture display area triggered by the user, and pushing related information according to the translation result and the translation mode selected by the user.
S340, if the trigger operation of the sentence translation option is detected, translating the target sentence corresponding to the mark of the user on the display screen, and pushing the related information according to the translation result and the translation mode selected by the user.
Specifically, if the translation mode is the learning mode or the academic mode, learning material related to the translation result is pushed to the user after the translation is completed, including similar test questions, and academic paper titles and abstracts in related fields; if the translation mode is the entertainment mode, entertainment information related to the translation result is pushed after the translation is completed, including the origins of popular Internet phrases, various related entertainment news, and the like; and if the translation mode is the news information mode, news related to the translation result is pushed after the translation is completed, including various related domestic and foreign news items. These value-added services require a network connection.
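The push rules above amount to a simple mode-to-content dispatch. A minimal sketch follows, with the actual content retrieval stubbed out as assumed placeholder strings (a real service would query an online content backend):

```python
def push_related_info(mode, translation_result):
    """Choose what value-added content to push for a finished translation,
    per the mode rules described above. Content fetching is stubbed out."""
    catalog = {
        "learning": "practice questions and paper abstracts about: ",
        "academic": "practice questions and paper abstracts about: ",
        "entertainment": "entertainment news and phrase origins about: ",
        "news": "domestic and foreign news about: ",
    }
    if mode not in catalog:
        return None  # unknown mode: push nothing
    return catalog[mode] + translation_result

print(push_related_info("learning", "machine translation"))
```

Keeping the mode table as data rather than branching code makes it easy to add further modes later without touching the dispatch logic.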
According to the technical scheme of the embodiment, when the picture is displayed, the translation mode options are generated, the translation modes include but are not limited to a learning mode, an academic mode, an entertainment mode and a news information mode, and different value-added services can be provided according to the translation modes and translation results, so that personalized experience is provided for different types of users.
Example Four
Fig. 4 is a schematic diagram of a translation apparatus according to a fourth embodiment of the present invention, which is capable of executing a translation method according to any embodiment of the present invention, and has functional modules and beneficial effects corresponding to the execution method.
The device comprises:
the picture acquiring module 410 is configured to acquire a picture containing text information.
A translation option generating module 420, configured to generate a text translation option when the picture is displayed; wherein the text translation options include a word translation option and a sentence translation option.
The first translation module 430 is configured to translate a target word corresponding to a target position of a screen in a picture display area triggered by a user if a trigger operation on a word translation option is detected.
The second translation module 440 is configured to, if a trigger operation on a sentence translation option is detected, translate a target sentence corresponding to an identifier of a user on a display screen.
According to the embodiment of the invention, the picture containing the text information is obtained, the corresponding target word or the corresponding target sentence is translated according to the generated word translation option and sentence translation option, and the word or the sentence needing to be translated does not need to be input, so that the translation efficiency can be improved, and the translation operation can be simplified.
Further, the first translation module 430 is configured to, if a trigger operation on the word translation option is detected, divide the screen into word areas according to the text information in the picture;
acquiring a target position of a screen in a picture display area triggered by a user, and determining a target word area where the target position is located;
and identifying a target word in the target word area, and translating the target word.
Further, the second translation module 440 is configured to, if a trigger operation on a sentence translation option is detected, divide a sentence region on a screen according to text information in a picture;
acquiring a sliding track of a user on a screen, and determining a target sentence area where the sliding track is located;
and identifying a target sentence in the target sentence region, and translating the target sentence.
Further, if the text information comprises print characters, recognizing words or sentences by adopting an Optical Character Recognition (OCR) algorithm;
and if the text information comprises handwritten characters, recognizing words or sentences by adopting a dual recognition engine recognition algorithm based on OCR.
Further, the text translation option further comprises a paragraph translation option.
The device further comprises:
the third translation module 450 is configured to, if a trigger operation on the paragraph translation option is detected, obtain a closed graph drawn on a screen by a user;
and identifying a target text paragraph in the closed graph, and translating the target text paragraph.
Further, the apparatus further comprises:
and a bullet box opening module 460, configured to open a selection bullet box for translating the language when a preset action of the user on the screen is detected.
And the bullet box closing module 470 is configured to close the selection bullet box when receiving the instruction selected by the target translation language.
Correspondingly, translating the target word corresponding to the target position of the screen in the picture display area triggered by the user includes:
and translating the target words into corresponding words in the target translation language.
Correspondingly, translating the target sentence corresponding to the mark drawn by the user on the display screen includes:
and translating the target sentence into a corresponding sentence in the target translation language.
Further, the apparatus further comprises:
the inclination correction module 480 is configured to determine whether each line of text in the text information is on the same horizontal line or each column of text is on the same vertical line;
if not, carrying out inclination correction on the picture so as to enable each line of characters in the text information to be on the same horizontal line or each column of characters to be on the same vertical line.
The translation device can execute the translation method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method. For technical details that are not described in detail in this embodiment, reference may be made to the translation method provided in any embodiment of the present invention.
Example Five
Fig. 5 is a schematic structural diagram of a terminal according to a fifth embodiment of the present invention, and illustrates a block diagram of a terminal 512 suitable for implementing embodiments of the present invention. The terminal 512 shown in Fig. 5 is only an example and should not impose any limitation on the functions and the scope of use of the embodiments of the present invention.
As shown in Fig. 5, the terminal 512 is represented in the form of a general purpose computing device, and has functions of saving pictures by photographing, screenshots, and the like, and of translating. The components of the terminal 512 may include, but are not limited to: one or more processors 516, a storage device 528, and a bus 518 that couples the various system components including the storage device 528 and the processors 516.
Bus 518 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The terminal 512 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by terminal 512 and includes both volatile and nonvolatile media, removable and non-removable media.
Storage 528 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 530 and/or cache memory 532. The terminal 512 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 534 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in Fig. 5, and commonly referred to as a "hard drive"). Although not shown in Fig. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 518 through one or more data media interfaces. Storage 528 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
Program 540 having a set (at least one) of program modules 542 may be stored, for example, in storage 528, such program modules 542 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination may include an implementation of a network environment. The program modules 542 generally perform the functions and/or methods of the described embodiments of the invention.
The terminal 512 may also communicate with one or more external devices 514 (e.g., keyboard, pointing device, camera, display 524, etc.), with one or more devices that enable a user to interact with the terminal 512, and/or with any devices (e.g., network card, modem, etc.) that enable the terminal 512 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 522. Also, the terminal 512 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 520. As shown, the network adapter 520 communicates with the other modules of the terminal 512 via a bus 518. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the terminal 512, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 516 performs various functional applications and data processing by executing programs stored in the storage 528, for example, implementing the translation method provided by the above-described embodiments of the present invention.
The terminal can acquire a picture containing text information and translate the corresponding target word or target sentence according to the generated word translation option and sentence translation option, without requiring the user to input the word or sentence to be translated, thereby improving translation efficiency and simplifying the translation operation.
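The flow summarized above can be illustrated with a minimal sketch. All function names, the stub OCR step, and the tiny dictionary below are illustrative assumptions for demonstration only; the patent does not specify a particular text-recognition or translation engine.

```python
# Sketch of the translation flow: acquire picture -> extract text ->
# dispatch on the user's selected text translation option.
# Stub names (extract_text, translate_word, ...) are hypothetical.

# Stub OCR step: a real terminal would run image text recognition here.
def extract_text(picture):
    return picture.get("text", "")

# Stub translation engine: a tiny lookup table for demonstration only.
_DICTIONARY = {"hello": "bonjour", "world": "monde"}

def translate_word(word):
    return _DICTIONARY.get(word.lower(), word)

def translate_sentence(sentence):
    # Word-by-word lookup stands in for a real sentence translator.
    return " ".join(translate_word(w) for w in sentence.split())

def handle_picture(picture, option):
    """Dispatch on the text translation option selected by the user."""
    text = extract_text(picture)
    if option == "word":
        # Word translation option: translate each recognized word.
        return [translate_word(w) for w in text.split()]
    elif option == "sentence":
        # Sentence translation option: translate the whole sentence.
        return translate_sentence(text)
    raise ValueError("unknown translation option: " + option)

picture = {"text": "hello world"}
print(handle_picture(picture, "word"))      # ['bonjour', 'monde']
print(handle_picture(picture, "sentence"))  # bonjour monde
```

The dispatch on the selected option mirrors the word/sentence translation options generated when the picture is displayed; a production implementation would replace the stubs with the terminal's recognition and translation services.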
Example six
An embodiment of the present invention further provides a computer storage medium storing a computer program, which when executed by a computer processor is configured to perform the translation method according to any one of the above-mentioned embodiments of the present invention.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
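The split-execution case described above, where program code runs partly on the user's terminal and partly on a remote computer, can be sketched as follows. The remote server is simulated in-process here; in a real deployment it would be reached over a LAN, WAN, or Internet connection, and the payload fields and lookup table are illustrative assumptions.

```python
# Sketch of split execution: the terminal-side code builds the request
# locally and delegates the actual translation to a remote server.

def remote_translate(payload):
    """Simulated remote server endpoint (would be a network call in practice)."""
    table = {"hello": "hola"}  # illustrative server-side dictionary
    words = payload["text"].split()
    return {"translation": " ".join(table.get(w, w) for w in words)}

def client_translate(text):
    # Local part: assemble the request payload on the user's terminal.
    payload = {"text": text, "target_lang": "es"}
    # Remote part: ship the payload to the server and read back the result.
    response = remote_translate(payload)
    return response["translation"]

print(client_translate("hello world"))  # hola world
```

Keeping the client unaware of how the server translates lets the terminal stay lightweight, which matches the pattern of executing part of the program code on a remote computer or server.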
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions may be made without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, it is not limited to those embodiments and may include other equivalent embodiments without departing from its spirit, the scope of the present invention being determined by the scope of the appended claims.