

26.1 Labelling databases

In order for Festival to use a database it is most useful to build utterance structures for each utterance in the database. As discussed earlier, utterance structures contain relations of items. Given such a structure for each utterance in a database we can easily read in the utterance representation and access it, dumping information in a normalised way allowing for easy building and testing of models.

Of course the level of labelling that exists, or that you are willing to do by hand or using some automatic tool, will vary from database to database. For many purposes you will at least need phonetic labelling. Hand-labelled data is still better than auto-labelled data, though that could change. The size and consistency of the data are important too.

For this discussion we will assume labels for: segments, syllables, words, phrases, intonation events and pitch targets. Some of these can be derived automatically; others need to be labelled by hand. The process would not fail with less labelling, but of course you would not be able to extract as much information from the result.

In our databases these labels are in Entropic's Xlabel format, though it is fairly easy to convert from any reasonable format.
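As a sketch of what reading such files involves, the following Python function parses a single Xlabel file. It assumes the common layout: a free-form header terminated by a line containing only `#`, followed by one label per line of the form `<end_time> <colour> <label>`. The function name and the exact header handling are illustrative assumptions, not part of Festival.

```python
# Minimal Xlabel reader (sketch).  Assumes a header terminated by a
# line containing only '#', then one label per line:
#     <end_time> <colour> <label>
def read_xlabel(path):
    labels = []
    with open(path) as f:
        in_header = True
        for line in f:
            line = line.strip()
            if in_header:
                if line == "#":
                    in_header = False   # header ends at the '#' line
                continue
            if not line:
                continue
            parts = line.split(None, 2)
            end_time = float(parts[0])
            # The middle field is a display colour; the label is last.
            name = parts[2] if len(parts) > 2 else parts[1]
            labels.append((end_time, name))
    return labels
```

The result is a list of (end time, label) pairs, which is usually all a conversion script needs from each file.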

Segment
These give phoneme labels for files. Note that these labels must be members of the phoneset you will be using for this database. Phone label files often contain extra labels (e.g. beginning and end silence) which are not really part of the phoneset. You should remove or re-label these phones accordingly.
Word
Again these will need to be provided. The end of the word should come at the last phone in the word (or just after). Pauses/silences should not be part of the word.
Syllable
There is a chance these can be automatically generated from Word and Segment files given a lexicon. Ideally these should include lexical stress.
IntEvent
These should ideally mark accent/boundary tone type for each syllable, but this almost certainly requires hand-labelling. Also, given that hand-labelling of accent type is harder and less accurate, it is arguable whether anything finer than accented vs. non-accented can be labelled reliably.
Phrase
This could just mark the last non-silence phone in each utterance, or before any silence phones in the whole utterance.
Target
This can be automatically derived from an F0 file and the Segment files. Marking the mean F0 in each voiced phone seems to give adequate results.
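The clean-up suggested under Segment above, removing or re-labelling phones that are not in the phoneset, can be sketched as below. The relabel table shown (mapping `h#` and `sil` to `pau`) is a hypothetical example and must be adapted to your own phoneset and labelling conventions.

```python
# Sketch: map out-of-phoneset labels onto phoneset members.
# The RELABEL table below is hypothetical; adjust it for your phoneset.
RELABEL = {"h#": "pau", "sil": "pau", "ssil": "pau"}

def clean_segments(labels, phoneset):
    """labels: list of (end_time, name) pairs; phoneset: set of names."""
    cleaned = []
    for end_time, name in labels:
        name = RELABEL.get(name, name)      # re-label known extras
        if name not in phoneset:            # anything else is an error
            raise ValueError("label %r not in phoneset" % name)
        cleaned.append((end_time, name))
    return cleaned
```

Raising an error on unknown labels, rather than silently dropping them, makes labelling inconsistencies visible early, before they corrupt the utterance structures.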
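The Target derivation described above, one target per voiced phone carrying its mean F0, might be computed along the following lines. This assumes the F0 file has been read into (time, F0) pairs and the segments into (start, end, phone) triples, and takes F0 > 0 as a simplistic voicing test; none of these names come from Festival itself.

```python
def mean_f0_targets(f0_track, segments):
    """Place one target at each segment's midpoint carrying the mean of
    the voiced (f0 > 0) samples that fall inside the segment.
    f0_track: list of (time, f0) pairs; segments: (start, end, phone)."""
    targets = []
    for start, end, phone in segments:
        voiced = [f0 for t, f0 in f0_track if start <= t < end and f0 > 0]
        if voiced:  # unvoiced segments get no target at all
            targets.append(((start + end) / 2.0, sum(voiced) / len(voiced)))
    return targets
```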
Once these files are created, an utterance file can be automatically built from the above data. Note it is pretty easy to get the streams right, but getting the relations between the streams is much harder. First, labelling is rarely exact, so small windows of error must be allowed to ensure things line up properly. Second, some label files identify points in time (IntEvent and Target) while others identify segments (e.g. Segment, Word, etc.); relations have to know this in order to get the linking right. For example, it is not right for all syllables between two IntEvents to be linked to an IntEvent; the IntEvent should be linked only to the syllable it falls within.

The script festival/examples/make_utts is an example Festival script which automatically builds the utterance files from the above labelled files.

The script by default assumes a hierarchy in a database directory of the following form: under a directory festival/, where all Festival-specific database information can be kept, a directory relations/ contains a subdirectory for each basic relation (e.g. Segment/, Syllable/, etc.), each of which contains the basic label files for that relation.

The following command will build a set of utterance structures (including building the relations that link between these basic relations).

     make_utts -phoneset radio festival/relations/Segment/*.Segment

This will create utterances in festival/utts/. make_utts has a number of options; use -h to list them. The -eval option allows extra Scheme code to be loaded which may be called by the utterance building process. The function make_utts_user_function will be called on each utterance created. Redefining it in database-specific loaded code allows database-specific fixes to the utterance.