EP0712529B1 - Synthesising speech by converting phonemes to digital waveforms - Google Patents
Synthesising speech by converting phonemes to digital waveforms
- Publication number
- EP0712529B1 (application EP94922979A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- phonemes
- access
- window
- text
- level
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Lifetime
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/06—Elementary speech units used in speech synthesisers; Concatenation rules
- G10L13/07—Concatenation rules
Definitions
- This invention relates to synthetic speech and more particularly to a method of synthesising a digital waveform from signals representing phonemes.
- the starting point is an electronic representation of conventional typography, e.g. a disk produced by a word processor.
- Many stages of processing are needed to produce synthesised speech from such a starting point but, as a preliminary part of the processing, it is usual to convert the conventional text into a phonetic text.
- the signals representing such a phonetic text will be called "phonemes".
- this invention addresses the problem of converting the signals representing phonemes into a digital waveform.
- digital waveforms are commonplace in audio technology, and digital-to-analogue converters and loudspeakers are well-known devices which enable digital waveforms to be converted into acoustic waveforms.
- This document describes a method of synthesising speech using synthesis units (allophones) which are determined and generated automatically using a labelled natural speech database.
- optimum synthesis units are selected on the basis of a "context matching score" against a given phoneme string.
- by concatenating these units, a sequence of spectral parameters is obtained.
- a waveform is obtained from these parameters and, since each unit represents precisely the appropriate coarticulatory phenomena, no smoothing or interpolation techniques are required.
- This invention uses a linked database to convert strings of phonemes into digital waveform but it also takes into account the context of the selected phoneme string.
- This invention also comprises a novel form of database which facilitates the taking into account of the context and the invention also includes the method whereby the preferred database strings are selected from alternatives stored therein.
- the method of the invention converts input signals representing a text expressed in phonemes into a digital waveform which is ultimately converted into an acoustic wave. Before its conversion, the initial digital waveform may be further processed in accordance with methods which will be familiar to persons skilled in the art.
- the phoneme set used in the preferred embodiment conforms to the SAM-PA (Speech Assessment Methods - Phonetic Alphabet) simple set number 6. It is to be understood that the method of the invention is carried out in electronic equipment and the phonemes are provided in the form of signals so that the method corresponds to the converting of an input waveform into an output waveform.
- the preferred embodiment of the invention converts signals representing strings of one, two or three phonemes into digital waveform, but it always operates on strings of five phonemes so that at least one preceding and at least one following phoneme is taken into account. This has the effect that, when alternative strings of five phonemes are available, the "best" context is selected.
- this invention makes particular use of a string of five phonemes and this string will hereinafter be called a "context window" and the five phonemes which constitute the "context window" will be identified as P1, P2, P3, P4 and P5 in sequence. It is a key feature of this invention that a "data context window" being five consecutive phonemes from the input signal is matched with an "access context window" being a sequence of five consecutive phonemes contained in the database.
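- As an illustration only (not taken from the patent text), the sketch below shows one way data context windows might be formed from the input phoneme signal, each input phoneme becoming P3 of its own window; the silence padding symbol "#" and the Python list representation are assumptions.

```python
# Illustrative sketch (assumptions: phonemes as strings, "#" as a silence pad).
def data_context_windows(phonemes, pad="#"):
    """Yield one five-phoneme window (P1, P2, P3, P4, P5) per input phoneme."""
    padded = [pad, pad] + list(phonemes) + [pad, pad]
    for i in range(2, len(padded) - 2):
        yield tuple(padded[i - 2:i + 3])  # the phoneme at position i is P3 of its window

for window in data_context_windows(["h", "@", "l", "@U"]):
    print(window)
```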
- variable length strings are converted into digital waveform.
- the context of the selected strings is not taken into account.
- Each phoneme comprised in a selected string is, of course, in context with all the other phonemes of the string but the context of the string as a whole is not taken into account.
- This invention not only takes into account the contexts within the selected string but it also selects a best-matching string from the strings available in the database. This specification will now describe important integers of the preferred embodiment, namely:-
- This invention selects from alternative context windows on the basis of a "best" match between the input context window and the various stored context windows. Since there are many, e.g. 10⁸ or 10¹⁰, possible context windows (of 5 phonemes each) it is not possible to store all of them, i.e. the database will lack some of the possible context windows. If all possible context windows were stored it would not be necessary to define a "best" match since an exact correspondence would always be available. However, each individual phoneme should be included in the database and it is always possible to achieve an exact match for at least one phoneme; in the preferred embodiment it is always possible to match exactly P3 of the data context window with P3 of the stored context window but, in general, further exact matches may not be possible.
- This invention defines a correlation parameter between two phonemes as follows. Corresponding to each phoneme there is a type-vector which consists of an ordered list of co-efficients. Each of these co-efficients represents a feature of its phoneme, e.g. whether its phoneme is voiced or unvoiced or whether or not its phoneme is a sibilant, a plosive or a labial. It is also desirable to include locational features, e.g. whether or not the phoneme is in a stressed or unstressed syllable.
- the type-vector uniquely characterises its phoneme and two phonemes can be compared by comparing their type-vectors co-efficient by co-efficient; e.g. by using an exclusive-NOR gate (which is sometimes called an equivalence gate).
- the number of matchings is one way of defining the correlation parameter. If desired this can be converted to a percentage by dividing by the maximum possible value of the parameter and multiplying by 100.
- a mis-match parameter can be defined, e.g. by counting the number of discrepancies in the two type-vectors. It will be appreciated that selecting a "best" match is equivalent to selecting the lowest mis-match.
- the primary definition relates to the correlation parameter of a pair of phonemes.
- the correlation parameter of a string is obtained by summing or averaging the parameters of the corresponding pairs in the two strings. Weighted averages can be utilised where appropriate.
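- A minimal sketch of the comparison just described, assuming a small made-up feature set with binary co-efficients (the patent only requires an ordered list of co-efficients per phoneme; the features and values below are illustrative assumptions):

```python
# Illustrative type-vectors: (voiced, sibilant, plosive, labial, stressed syllable).
TYPE_VECTORS = {
    "s": (0, 1, 0, 0, 0),
    "z": (1, 1, 0, 0, 0),
    "p": (0, 0, 1, 1, 0),
    "b": (1, 0, 1, 1, 0),
}

def correlation(p, q):
    """Correlation parameter: number of co-efficients on which the vectors agree."""
    return sum(a == b for a, b in zip(TYPE_VECTORS[p], TYPE_VECTORS[q]))

def correlation_percent(p, q):
    """The same parameter expressed as a percentage of its maximum value."""
    return 100.0 * correlation(p, q) / len(TYPE_VECTORS[p])

def mismatch(p, q):
    """Equivalent mis-match parameter: number of differing co-efficients."""
    return len(TYPE_VECTORS[p]) - correlation(p, q)

def string_correlation(s1, s2):
    """Average of the pairwise correlations of two equal-length phoneme strings."""
    return sum(correlation(p, q) for p, q in zip(s1, s2)) / len(s1)

print(correlation("s", "z"), correlation_percent("s", "z"),
      mismatch("p", "b"), string_correlation("sp", "zb"))
```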
- the database is based on an extended passage of the selected language, e.g. English (although the information content of the passage is not important).
- a suitable passage lasts about two or three minutes and it contains about 1000-1500 phonemes.
- the precise nature of the extended passage is not particularly important although it must contain every phoneme and it should contain every phoneme in a variety of contexts.
- the extended passage can be stored in two different formats.
- First the extended passage can be expressed in phonemes to provide the access section of a linked database. More specifically, the phonemes representing the extended passage are divided into context windows each of which contains 5 phonemes.
- the method of the invention comprises obtaining best matches for the data context windows with the stored context windows just identified.
- the extended passage can also be provided in the form of a digitised waveform. As would be expected, this is achieved by having a reader or reciter speak the extended passage into a microphone so as to make a digital recording using well-established technology. Any point in the digital recording can be defined by a parameter, e.g. by the time from the start. Analysing the recording establishes values for the time-parameter corresponding to the break between each pair of phonemes in the equivalent text. This arrangement permits phoneme-to-waveform conversion for any included string by establishing the starting value of the time-parameter corresponding to the first phoneme of the string and the finishing value of the time-parameter corresponding to the last phoneme of the string and retrieving the equivalent portion of the database, i.e. the specified digital waveform. Specifically, a conversion for any string of one, two or three phonemes can be achieved.
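- The retrieval just described might be pictured as below; the sample rate, the placeholder recording and the boundary times are purely illustrative assumptions, not values from the patent.

```python
# Illustrative sketch of the output section: a digital recording plus the
# time-parameter values at the phoneme boundaries found by analysis.
SAMPLE_RATE = 16000                    # samples per second (assumed)
recording = [0] * SAMPLE_RATE          # one second of digitised waveform (placeholder)

# time-parameter (seconds from the start) before each phoneme, plus the end
# of the last phoneme; here for a toy four-phoneme stretch of the passage
boundaries = [0.00, 0.08, 0.21, 0.30, 0.45]

def waveform_for(first, last):
    """Return the waveform segment spanning phonemes first..last (inclusive)."""
    start = int(boundaries[first] * SAMPLE_RATE)
    finish = int(boundaries[last + 1] * SAMPLE_RATE)
    return recording[start:finish]

print(len(waveform_for(1, 2)), "samples")   # the waveform for a two-phoneme string
```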
- the phoneme version of the extended text is stored in the form of context windows each of five phonemes. This is most suitably achieved by storing the phonemes in a tree which has three hierarchical levels.
- the first level of the hierarchy is defined by phoneme P3 of each window.
- the effect is that every phoneme gives direct access to a subset of the context windows, i.e. the totality of context windows is divided into subsets and each subset has the same value of P3.
- the next level of the tree is defined by phonemes P2 and P4 and, since this selection is made from the subsets defined above, the effect is that the totality of context windows is further divided into smaller subsets each of which is defined by having phonemes P2, P3 and P4 in common. (There are approximately half a million possible subsets but most of them will be empty because the relevant sequence P2, P3, P4 does not occur in the extended text.) Empty subsets are not recorded at all so that the database remains of manageable size. Nevertheless, for each triple sequence P2, P3, P4 which occurs in the extended text there will be a subset recorded in the second level of the database under P2, P4, which level will also have been indexed at the first level under P3.
- the second level gives access to a third level which contains subsets having P2, P3 and P4 as exact matches and it contains all the values of P1 and P5 corresponding to these triples. Best matches for data P1 and P5 are selected. This selection completely identifies one of the context windows contained in the extended text and it provides access to the time-parameters of said window. Specifically it provides start and finish time-parameters for up to four different strings, namely: (a) the single phoneme P3; (b) the pair P2, P3; (c) the pair P3, P4; and (d) the triple P2, P3, P4.
- the database provides beginning and ending values of the time-parameter corresponding to each one of the selected strings (a) - (d).
- the time-parameter defines the relevant portion of a digital wave form so that the equivalent wave form is selected.
- item (d) will be offered if it is contained in the database; in this case items (a), (b), and (c) are all embedded in the selected (d) and they are, therefore, available as alternatives. If item (d) is not contained in the database then, clearly, this option cannot be offered.
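- One possible (assumed) representation of the three-level tree described above is a nested mapping keyed first by P3 and then by the pair (P2, P4), holding the (P1, P5) alternatives and the window position at the lowest level; the sketch below is illustrative only and uses single characters as stand-in phonemes.

```python
from collections import defaultdict

def build_access_tree(extended_passage):
    """Index every five-phoneme window of the extended passage by P3, then by (P2, P4)."""
    tree = defaultdict(lambda: defaultdict(list))
    for i in range(2, len(extended_passage) - 2):
        p1, p2, p3, p4, p5 = extended_passage[i - 2:i + 3]
        # empty (P2, P4) subsets are simply never created, keeping the tree small
        tree[p3][(p2, p4)].append(((p1, p5), i))   # i gives access to the time-parameters
    return tree

passage = list("#kaet#saet#on#dhe#maet#")   # toy "extended passage", "#" standing for silence
tree = build_access_tree(passage)
print(sorted(tree["a"].keys()))              # the (P2, P4) pairs recorded under P3 = "a"
```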
- the preferred embodiment is based on a context window which is five phonemes long. However, the full string of five phonemes is never selected. Even if, fortuitously, the input text contains a string of five found in the database, only the triple string P2, P3, P4 will be used. This emphasises that the important feature of the invention is the selection of a string from a context and, therefore, the invention selects the "best" context window of five phonemes and only uses a portion thereof in order to ensure that all selected strings are based upon a context.
- the selected data phoneme is not utilised in isolation but as part of its context window. More precisely the selected data phoneme becomes phoneme P3 of a data window with its two predecessors and two successors being selected to provide the five phonemes of the relevant context window.
- the database described above is searched for this context window; since it is unlikely that the exact window will be located, the search is for the best fitting of the stored context windows.
- the first step of the search involves accessing the tree described above using phoneme P3 as the indexing element. As explained above this gives immediate access to a subset of the stored context windows. More specifically, accessing level one by phoneme P3 gives access to a list of phoneme pairs which correspond to possible values of P2 and P4 of the data context-window. The best pair is selected according to the following four criteria.
- First criterion A triple match is selected where it occurs, i.e. exact matches are found for both P2 and P4 so that the stored triple P2, P3, P4 corresponds exactly with the data window.
- Second criterion In the absence of a triple match a left-hand pair will be selected if one occurs. The left-hand match is selected when an exact match for P2 is found and, if alternatives are available, the P4 which has the highest correlation parameter will be selected to give access to level 3 of the tree.
- the third criterion is similar to the second except that it is a right-hand pair depending upon an exact match being discovered for P4. In this case access to level 3 is given by the P2 value which provides the highest correlation parameter.
- Criterion four occurs when there is no exact match for either P2 or P4, in which case the pair P2, P4 with the highest average correlation parameter is selected as the basis of access to level 3.
- if criterion 1 succeeds, then it will also be possible to take as alternatives a left-hand pair, a right-hand pair and a single value in accordance with criteria 2, 3 and 4.
- if criterion 1 fails, it is still possible that a left-hand pair will be found by criterion 2 and it is even possible that, simultaneously, a right-hand pair will be found by criterion 3. However, because criterion 1 has failed, they will be selected from different parts of the database and they will give access to different parts of the tree at level 3.
- criterion 4 will only be accepted when criteria 1, 2 and 3 have all failed, and it follows that the phoneme P3 cannot be found in matching triples or pairings in any of the stored context windows.
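- The four criteria might be sketched as follows; the stand-in correlation function and the example data are assumptions made only so that the sketch runs on its own.

```python
def correlation(p, q):
    """Stand-in for the type-vector comparison sketched earlier."""
    return 1.0 if p == q else 0.0

def best_pair(p2, p4, stored_pairs):
    """Select the stored (P2, P4) pair giving access to level 3 of the tree."""
    # criterion 1: triple match - exact matches for both P2 and P4
    if (p2, p4) in stored_pairs:
        return (p2, p4)
    # criterion 2: left-hand pair - exact P2, best-correlating P4
    left = [pair for pair in stored_pairs if pair[0] == p2]
    if left:
        return max(left, key=lambda pair: correlation(p4, pair[1]))
    # criterion 3: right-hand pair - exact P4, best-correlating P2
    right = [pair for pair in stored_pairs if pair[1] == p4]
    if right:
        return max(right, key=lambda pair: correlation(p2, pair[0]))
    # criterion 4: no exact match for P2 or P4 - highest average correlation
    return max(stored_pairs,
               key=lambda pair: (correlation(p2, pair[0]) + correlation(p4, pair[1])) / 2)

print(best_pair("k", "t", [("k", "d"), ("g", "t"), ("m", "n")]))   # left-hand pair ("k", "d")
```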
- the selection of a context window gives rise to either one or two areas of the third level of the tree.
- the third level may contain several pairings for phonemes 1 and 5 of the data context window.
- the pair with the best average correlation parameter is selected as the context window in the access portion of the database.
- this context window is converted to digital waveform using the time-parameter.
- the preferred method of making the reduction is carried out by processing a short segment of input text, e.g. a segment which begins and ends with a silence. Provided it is not too long, a sentence constitutes a suitable segment. If a sentence is very long, e.g. more than thirty words, it usually contains one or more embedded silences, e.g. between clauses or other sub-units. In the case of long sentences such sub-units are suitable for use as the segments.
- each phoneme has one, and only one, conversion.
- the input text will have been divided into sub-strings of 1, 2 or 3 phonemes matching the database and the beginning and ending values for the selected strings will therefore be established.
- the output portion of the database takes the form of a digitised waveform and the parameters which have been established define segments of this waveform. Therefore the designated segments are selected and abutted to produce the digital waveform corresponding to the input text. This completes the requirement of the invention.
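- A minimal sketch of this abutting step, using placeholder lists in place of the waveform segments retrieved from the output section:

```python
def assemble(segments):
    """Abut the retrieved waveform segments end to end to form the output waveform."""
    output = []
    for segment in segments:
        output.extend(segment)
    return output

# placeholder waveforms for three selected sub-strings of the input text
retrieved = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
print(assemble(retrieved))
```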
- this can be provided as audible output using conventional digital to analogue conversion techniques and conventional loudspeakers. If desired, the primary digital waveform can be enhanced using techniques known to those skilled in the art.
- the speech engine comprises primary processor 11 which is adapted to accept text in graphemes and to produce therefrom an equivalent text in phonemes.
- This text is passed to converter 12 which is operatively associated with a database 13 in accordance with the invention.
- Converter 12 matches segments of the phoneme text with segments stored in the access portion of database 13. Thus segments of digital waveform are retrieved and these are assembled into extended portions of digital waveform corresponding to extended portions of the original input.
- the speech engine is connected to receive its input from an external database 16 which holds texts in conventional orthography.
- External database 16 is conveniently operated by keyboard 17 to select a text stored in database 16. This text is provided to the primary processor 11 and it appears at the output port 15 as an analogue waveform.
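- The flow of Figure 1 might be summarised as below; the function names and the toy phoneme-to-waveform lookup are assumptions standing in for the primary processor 11, the converter 12 and the database 13.

```python
def primary_processor(grapheme_text):
    """Stand-in for element 11: a vastly simplified grapheme-to-phoneme step."""
    return grapheme_text.lower().split()

def converter(phoneme_text, database):
    """Stand-in for element 12: look up a waveform segment for each phoneme string."""
    return [database.get(p, []) for p in phoneme_text]

def speech_engine(grapheme_text, database):
    """Chain the two stages and abut the retrieved segments into one waveform."""
    phonemes = primary_processor(grapheme_text)
    return [sample for part in converter(phonemes, database) for sample in part]

toy_database = {"hello": [1, 2, 3], "world": [4, 5]}   # assumed placeholder for database 13
print(speech_engine("Hello world", toy_database))
```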
- FIG 2 shows a speech engine as illustrated in Figure 1 attached to a public access telephone network.
- a conventional speech telephone 20 is connected to a station 22 via a switched access network 21.
- Station 22 includes a speech engine as shown in Figure 1 and the output port 15 is connected to the network so that the information available in the external database 16 can be provided, as an analogue acoustic waveform, to the telephone 20.
- the keypad (used for dialling) of the telephone 20 can be used as the keyboard 17 of the external database 16 (in which case the external database 16 preferably contains instructions which can be read by the speech engine).
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Electrophonic Musical Instruments (AREA)
- Document Processing Apparatus (AREA)
Description
Claims (8)
- A method of converting an input signal into an output signal, wherein said input signal represents a text in phonemes and said output signal is a digital waveform convertible to an acoustic waveform corresponding to said text, wherein said method makes use of a two-part database having an access section linked to an output section, characterised in that said access section defines access windows each of which corresponds to a string of phonemes and said output section contains digital waveforms corresponding to the access windows; wherein said method comprises comparing windows of said input signal with the access windows to select, in each case, the access window which provides the best match including an exact match for at least one internal phoneme and discarding at least the first and last phonemes of said best match to identify a shorter string of phonemes which is an exact match for a portion of said input signal, retrieving from the output section the digital waveform corresponding to the selected exact match and thereafter joining together the selected portions of the digital waveform to produce the output signal.
- A method according to claim 1, in which the access section is based on an extended text in phonemes and each access window corresponds to a string of phonemes contained in said extended text; the output section contains an extended digital waveform corresponding to the extended phoneme text of the access section; and the portion retrieved from the output section is the segment of the extended digital waveform which corresponds to the exact match.
- A method according to either claim 1 or claim 2, which method includes forming a best match for a window of five phonemes of said input signal and discarding at least the first and last phonemes of said best match to identify an exact match for a string of one, two or three phonemes.
- A method according to claim 3, in which the input section of the database is organised into three hierarchical levels, namely: (i) a top level containing single phonemes corresponding to the central phoneme of a window; (ii) a second level which contains the equivalents of the second and fourth phonemes of a window; and (iii) a lowest level which contains the equivalents of the first and fifth phonemes of a window;
- A method according to any one of the preceding claims, in which the digital output is converted into an analogue signal.
- A database component for use in a speech engine, said database having an access section containing signals representing phonemes linked to an output section containing digital waveform, characterised in that said access section is based on an extended text divided into access windows each of which contains five phonemes and that said output section contains an extended digital waveform corresponding to the extended phoneme text of the access section, wherein the access section is organised into three hierarchical levels, namely: (i) a top level containing single phonemes corresponding to the central phoneme of an access window; (ii) a second level which contains the equivalents of the second and fourth phonemes of an access window identified in the top level; and (iii) a lowest level which contains the equivalents of the first and fifth phonemes of an access window identified in the second level.
- A speech engine which comprises a primary processor (11) for converting a text in graphemes into an equivalent text in phonemes and a converter (12) for converting said text in phonemes into a digital waveform, characterised in that the converter (12) includes a database (13) according to claim 6.
- A telephone network which includes a speech engine according to claim 7, said speech engine being connected to the network for the transmission of the output of the speech engine to a remote location.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP94922979A EP0712529B1 (en) | 1993-08-04 | 1994-08-01 | Synthesising speech by converting phonemes to digital waveforms |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP93306219 | 1993-08-04 | ||
EP93306219 | 1993-08-04 | ||
US16699893A | 1993-12-16 | 1993-12-16 | |
EP94922979A EP0712529B1 (en) | 1993-08-04 | 1994-08-01 | Synthesising speech by converting phonemes to digital waveforms |
PCT/GB1994/001688 WO1995004988A1 (en) | 1993-08-04 | 1994-08-01 | Synthesising speech by converting phonemes to digital waveforms |
Publications (2)
Publication Number | Publication Date |
---|---|
EP0712529A1 EP0712529A1 (en) | 1996-05-22 |
EP0712529B1 true EP0712529B1 (en) | 1998-06-24 |
Family
ID=26134418
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP94922979A Expired - Lifetime EP0712529B1 (en) | 1993-08-04 | 1994-08-01 | Synthesising speech by converting phonemes to digital waveforms |
Country Status (10)
Country | Link |
---|---|
EP (1) | EP0712529B1 (en) |
JP (1) | JPH09504117A (en) |
AU (1) | AU674246B2 (en) |
CA (1) | CA2166883C (en) |
DE (1) | DE69411275T2 (en) |
DK (1) | DK0712529T3 (en) |
ES (1) | ES2118424T3 (en) |
HK (1) | HK1014431A1 (en) |
SG (1) | SG52347A1 (en) |
WO (1) | WO1995004988A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3884856B2 (en) * | 1998-03-09 | 2007-02-21 | キヤノン株式会社 | Data generation apparatus for speech synthesis, speech synthesis apparatus and method thereof, and computer-readable memory |
US7805307B2 (en) | 2003-09-30 | 2010-09-28 | Sharp Laboratories Of America, Inc. | Text to speech conversion system |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5153913A (en) * | 1987-10-09 | 1992-10-06 | Sound Entertainment, Inc. | Generating speech from digitally stored coarticulated speech segments |
AU632867B2 (en) * | 1989-11-20 | 1993-01-14 | Digital Equipment Corporation | Text-to-speech system having a lexicon residing on the host processor |
SE516521C2 (en) * | 1993-11-25 | 2002-01-22 | Telia Ab | Device and method of speech synthesis |
-
1994
- 1994-08-01 AU AU72701/94A patent/AU674246B2/en not_active Ceased
- 1994-08-01 WO PCT/GB1994/001688 patent/WO1995004988A1/en active IP Right Grant
- 1994-08-01 SG SG1996003262A patent/SG52347A1/en unknown
- 1994-08-01 EP EP94922979A patent/EP0712529B1/en not_active Expired - Lifetime
- 1994-08-01 JP JP7506281A patent/JPH09504117A/en active Pending
- 1994-08-01 DE DE69411275T patent/DE69411275T2/en not_active Expired - Lifetime
- 1994-08-01 DK DK94922979T patent/DK0712529T3/en active
- 1994-08-01 ES ES94922979T patent/ES2118424T3/en not_active Expired - Lifetime
- 1994-08-01 CA CA002166883A patent/CA2166883C/en not_active Expired - Fee Related
-
1998
- 1998-12-24 HK HK98115732A patent/HK1014431A1/en not_active IP Right Cessation
Also Published As
Publication number | Publication date |
---|---|
AU7270194A (en) | 1995-02-28 |
DK0712529T3 (en) | 1999-04-06 |
CA2166883C (en) | 1999-09-21 |
DE69411275T2 (en) | 1998-11-05 |
JPH09504117A (en) | 1997-04-22 |
DE69411275D1 (en) | 1998-07-30 |
SG52347A1 (en) | 1998-09-28 |
ES2118424T3 (en) | 1998-09-16 |
HK1014431A1 (en) | 1999-09-24 |
WO1995004988A1 (en) | 1995-02-16 |
AU674246B2 (en) | 1996-12-12 |
EP0712529A1 (en) | 1996-05-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6505158B1 (en) | Synthesis-based pre-selection of suitable units for concatenative speech | |
CA2351988C (en) | Method and system for preselection of suitable units for concatenative speech | |
US6094633A (en) | Grapheme to phoneme module for synthesizing speech alternately using pairs of four related data bases | |
US20090112587A1 (en) | System and method for generating a phrase pronunciation | |
EP1668628A1 (en) | Method for synthesizing speech | |
JP2002530703A (en) | Speech synthesis using concatenation of speech waveforms | |
CA2222582C (en) | Speech synthesizer having an acoustic element database | |
WO2004066271A1 (en) | Speech synthesizing apparatus, speech synthesizing method, and speech synthesizing system | |
CN110459202A (en) | A kind of prosodic labeling method, apparatus, equipment, medium | |
US5970454A (en) | Synthesizing speech by converting phonemes to digital waveforms | |
US5987412A (en) | Synthesising speech by converting phonemes to digital waveforms | |
EP0712529B1 (en) | Synthesising speech by converting phonemes to digital waveforms | |
US6502074B1 (en) | Synthesising speech by converting phonemes to digital waveforms | |
JPH0887297A (en) | Speech synthesis system | |
KR100259777B1 (en) | Optimal synthesis unit selection method in text-to-speech system | |
JP3576066B2 (en) | Speech synthesis system and speech synthesis method | |
JP3626398B2 (en) | Text-to-speech synthesizer, text-to-speech synthesis method, and recording medium recording the method | |
WO2000036591A1 (en) | Speech operated automatic inquiry system | |
CN1629933B (en) | Device, method and converter for speech synthesis | |
JP3302874B2 (en) | Voice synthesis method | |
JP2001249678A (en) | Device and method for outputting voice, and recording medium with program for outputting voice | |
JPH0573092A (en) | Speech synthesis system | |
Moise et al. | An Automated System for the Vocal Synthesis of Text Files in Romanian | |
CA2498736A1 (en) | System and method for generating a phrase pronunciation | |
JPH11305787A (en) | Voice synthesizing device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 19960118 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): BE CH DE DK ES FR GB IT LI NL SE |
|
17Q | First examination report despatched |
Effective date: 19961022 |
|
GRAG | Despatch of communication of intention to grant |
Free format text: ORIGINAL CODE: EPIDOS AGRA |
|
GRAG | Despatch of communication of intention to grant |
Free format text: ORIGINAL CODE: EPIDOS AGRA |
|
GRAG | Despatch of communication of intention to grant |
Free format text: ORIGINAL CODE: EPIDOS AGRA |
|
GRAH | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOS IGRA |
|
GRAH | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOS IGRA |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): BE CH DE DK ES FR GB IT LI NL SE |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
ITF | It: translation for a ep patent filed | ||
REG | Reference to a national code |
Ref country code: CH Ref legal event code: NV Representative=s name: JACOBACCI & PERANI S.A. |
|
REF | Corresponds to: |
Ref document number: 69411275 Country of ref document: DE Date of ref document: 19980730 |
|
REG | Reference to a national code |
Ref country code: ES Ref legal event code: FG2A Ref document number: 2118424 Country of ref document: ES Kind code of ref document: T3 |
|
ET | Fr: translation filed | ||
REG | Reference to a national code |
Ref country code: DK Ref legal event code: T3 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed | ||
REG | Reference to a national code |
Ref country code: GB Ref legal event code: IF02 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: NL Payment date: 20080718 Year of fee payment: 15 Ref country code: DK Payment date: 20080714 Year of fee payment: 15 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: BE Payment date: 20080801 Year of fee payment: 15 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: SE Payment date: 20090813 Year of fee payment: 16 |
|
BERE | Be: lapsed |
Owner name: BRITISH *TELECOMMUNICATIONS P.L.C. Effective date: 20090831 |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: V1 Effective date: 20100301 |
|
REG | Reference to a national code |
Ref country code: DK Ref legal event code: EBP |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20090831 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20100301 Ref country code: DK Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20090831 |
|
EUG | Se: european patent has lapsed | ||
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: CH Payment date: 20110824 Year of fee payment: 18 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20110901 Year of fee payment: 18 Ref country code: ES Payment date: 20110812 Year of fee payment: 18 Ref country code: DE Payment date: 20110823 Year of fee payment: 18 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: IT Payment date: 20110824 Year of fee payment: 18 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20100802 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20120831 Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20120831 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: ST Effective date: 20130430 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20120801 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20130301 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20120831 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R119 Ref document number: 69411275 Country of ref document: DE Effective date: 20130301 |
|
REG | Reference to a national code |
Ref country code: ES Ref legal event code: FD2A Effective date: 20131021 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: ES Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20120802 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20130821 Year of fee payment: 20 |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: PE20 Expiry date: 20140731 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION Effective date: 20140731 |