How to compose a sound |
|
Important note
The SoundComposerMan class has become partially obsolete after the introduction of the TracksBoard class, which offers a better editing experience.
For details about the use of the TracksBoard, refer to the How to use the TracksBoard to visually compose songs tutorial. For further details about the methods of the TracksBoard, refer to the TracksBoard class section.
|
Audio Sound Editor for .NET allows composing new sound and/or music files by mixing together audio data coming from several sources. An important concept behind this feature is that all sound items composing the session are layered, meaning that they can be added, removed and modified before composing the final mix and exporting it to a destination file on disk.
Main sources of audio data can be the following:
• | The internal sound generator which allows creating from scratch several kinds of wave tones, noises and DTMF tones |
• | The Microsoft Speech API which allows creating audio data from a string of text or from a text file through synthesized voices |
• | Sound files stored inside disk files or inside memory buffers: these sound files can be in several audio formats as seen inside the LoadSound and LoadSoundFromRawFile methods |
• | Another instance of the editor component |
• | An instance of the Audio Sound Recorder for .NET component |
The sound composer can create mono, stereo and multi-channel (up to 7.1) sessions and allows, for each item added to the session, defining the destination channel and the offset, in milliseconds, with respect to the beginning of the final audio stream.
The main actor in sound composition is the internal sound composer object, implemented through the SoundComposerMan class and exposed through the SoundComposer property; three main steps are needed to perform a sound composition:
• | Initializing the sound composer's session by defining the characteristics of the audio stream that will be composed, mainly sample rate and number of audio channels, through the SoundComposer.SessionInit method. |
• | Once the session has been initialized, we can start adding items through a set of methods described in the next paragraph; each item added to the session is identified through a unique identifier: this identifier can be used whenever you need to modify the item itself, for example to change its amplitude, its offset or its channel. |
• | At this point, when all of the needed items have been added and placed at the correct offsets and audio channels, we can start the sound composition by invoking the SoundComposer.SessionComposeItems method. During this phase, which may require several seconds depending upon the size/duration of each item, the container application is notified about the composition advancement through the SoundComposerStarted, SoundComposerPerc and SoundComposerDone events. After this step the sound editor will contain the mix of all of the items previously added to the session; this mix behaves exactly like a normal sound loaded through the LoadSound method, so you can perform playback, further editing and export to a destination audio file on disk through the ExportToFile method. A minimal sketch of these three steps is shown right after this list. |
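The fragment below is a minimal sketch of the three steps above: it initializes a mono session, adds a single sound file and starts the composition. The exact delegate signatures of the three notification events are not reproduced in this tutorial, so the sketch subscribes parameterless anonymous handlers; verify the actual signatures against the component's reference.
Visual C# |
// minimal sketch of the three composition steps; the event handlers below
// use parameterless anonymous methods because the exact delegate signatures
// are an assumption to be verified against the component's reference

// step 1: initialize a mono session at 44100 hz
audioSoundEditor1.SoundComposer.SessionInit (44100, 1);

// step 2: add a single item and keep its unique identifier
Int32 nUniqueId = 0;
audioSoundEditor1.SoundComposer.ItemSoundFileAdd ("My sound", 0,
    @"c:\myfolder\myfile1.mp3", false, 1, 0, ref nUniqueId);

// get notified about the composition advancement
audioSoundEditor1.SoundComposerStarted += delegate { Console.WriteLine ("composition started"); };
audioSoundEditor1.SoundComposerPerc += delegate { /* advancement percentage available inside the event arguments */ };
audioSoundEditor1.SoundComposerDone += delegate { Console.WriteLine ("composition completed"); };

// step 3: mix all of the items into the editor's contents
audioSoundEditor1.SoundComposer.SessionComposeItems ();
|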
It must be remarked that, after sound composition, the sound composer session is still alive: if you are not satisfied with the final result, you may still modify existing items, for example by moving the offset of one item or reducing the amplitude of another. Once all modifications have been performed, a new call to the SoundComposer.SessionComposeItems method will discard the previous mix and fill the editor's contents with the new one.
In order to totally remove a sound composer session and discard all of the previously added items, you would need to call the CloseSound method.
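As a sketch of this lifecycle, assuming nUniqueId is the identifier obtained when an item was added, a retouch followed by a new composition could look like the fragment below; the exact parameter lists of ItemOffsetSet and ItemAmplitudeSet are assumptions to be checked against their reference pages.
Visual C# |
// move the item 5 seconds forward and halve its amplitude
// (hypothetical parameter lists: unique identifier plus new value)
audioSoundEditor1.SoundComposer.ItemOffsetSet (nUniqueId, 5000);
audioSoundEditor1.SoundComposer.ItemAmplitudeSet (nUniqueId, 0.5f);

// discard the previous mix and fill the editor with the new one
audioSoundEditor1.SoundComposer.SessionComposeItems ();

// once fully done, discard the session and all of its items
audioSoundEditor1.CloseSound ();
|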
As mentioned in the second point above, new items can be added to the sound composer session through a set of specific methods which vary depending upon the type of item:
Audio files or audio data stored inside a different instance of the component
Audio generated from Microsoft Speech API
Pure, monaural and binaural wave tones
After adding an item to the session, you can still modify the item or obtain related information by leveraging the following set of methods:
- SoundComposer.ItemAmplitudeGet to obtain the amplitude of the item
- SoundComposer.ItemAmplitudeSet to modify the amplitude of the item
- SoundComposer.ItemChannelGet to obtain the channel of the audio stream that will reproduce the item
- SoundComposer.ItemChannelSet to modify the channel of the audio stream that will reproduce the item
- SoundComposer.ItemRemove to remove the item from the sound composition
- SoundComposer.ItemDurationGet to obtain the duration, expressed in milliseconds, of the item
- SoundComposer.ItemDurationSet to modify the duration, expressed in milliseconds, of the item
- SoundComposer.ItemOffsetGet to obtain the offset, expressed in milliseconds, of the item with respect to the beginning of the audio stream
- SoundComposer.ItemOffsetSet to modify the offset, expressed in milliseconds, of the item with respect to the beginning of the audio stream
- SoundComposer.ItemTypeGet to obtain the item's type
- SoundComposer.ItemFriendlyNameGet to obtain the friendly name of the item
- SoundComposer.ItemFriendlyNameSet to modify the friendly name of the item
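For example, a couple of the methods above could be combined as in the following sketch; ItemDurationGet is used later in this tutorial with a by-reference output parameter, while the parameter lists of ItemOffsetGet and ItemFriendlyNameSet are assumptions to be checked against their reference pages.
Visual C# |
// obtain the duration and offset of a previously added item
// (nUniqueId is the identifier returned when the item was added)
Int32 nDurationMs = 0;
audioSoundEditor1.SoundComposer.ItemDurationGet (nUniqueId, ref nDurationMs);
Int32 nOffsetMs = 0;
audioSoundEditor1.SoundComposer.ItemOffsetGet (nUniqueId, ref nOffsetMs); // assumed signature

// give the item a more descriptive friendly name (assumed signature)
audioSoundEditor1.SoundComposer.ItemFriendlyNameSet (nUniqueId, "Background music");
|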
Other methods, specific to each kind of item, are described inside the various sections below.
Adding audio files or audio data stored inside a different instance of the component
The sound composer allows adding sound files stored in different audio formats and media:
• | Sound files stored on the local disk through the SoundComposer.ItemSoundFileAdd method or through the SoundComposer.ItemSoundFileRawAdd method: accepted audio formats are the same supported by the LoadSound and LoadSoundFromRawFile methods. |
• | Sound files stored inside a memory buffer through the SoundComposer.ItemSoundFileMemoryAdd method or through the SoundComposer.ItemSoundFileMemoryRawAdd method: accepted audio formats are the same supported by the LoadSoundFromMemory and LoadSoundFromRawMemory methods. |
• | Audio data stored inside a different instance of the component through the SoundComposer.ItemSoundFileFromEditorAdd method. |
• | Audio data stored inside an instance of the Audio Sound Recorder for .NET component through the SoundComposer.ItemSoundFileFromRecorderAdd method. |
Once the sound file has been added to the session, you can modify some of its settings through the following set of methods:
- SoundComposer.ItemSoundFileDownmixToMonoGet to obtain the downmix to mono setting of the item
- SoundComposer.ItemSoundFileDownmixToMonoSet to modify the downmix to mono setting of the item
- SoundComposer.ItemSoundFileDurationGet to obtain the duration of the item
- SoundComposer.ItemSoundFileDurationStretch to modify the duration of the item through a tempo change
- SoundComposer.ItemSoundFileOriginalChannelsGet to obtain the original number of channels of the item
- SoundComposer.ItemSoundFileLoadRangeGet to obtain the loading range of the item
- SoundComposer.ItemSoundFileLoadRangeSet to modify the loading range of the item
- SoundComposer.ItemSoundFileLoopGet to obtain the number of loops of the item
- SoundComposer.ItemSoundFileLoopSet to modify the number of loops of the item
- SoundComposer.ItemSoundFileVolumeFadingGet to obtain the fading setting of the item
- SoundComposer.ItemSoundFileVolumeFadingRemove to remove the fading setting of the item
- SoundComposer.ItemSoundFileVolumeFadingSet to modify the fading setting of the item
- SoundComposer.ItemSoundFileVolumeSlidingAdd to add a new volume sliding to the item
- SoundComposer.ItemSoundFileVolumeSlidingGet to obtain the settings of a specific volume sliding of the item
- SoundComposer.ItemSoundFileVolumeSlidingNumGet to obtain the number of volume slidings of the item
- SoundComposer.ItemSoundFileVolumeSlidingRemove to remove a specific volume sliding of the item
- SoundComposer.ItemSoundFileVolumeSlidingUniqueIdGet to obtain the unique identifier of a specific volume sliding of the item
As you can see, there is a considerable number of methods that can be used to modify how a specific sound file will be mixed into the audio stream.
Limiting the loading range of the sound file can be achieved through the SoundComposer.ItemSoundFileLoadRangeSet method; imagine a sound file whose duration is 3 minutes: with this method you could for example limit the loaded sound duration to 30 seconds starting from the first minute of the sound.
If you should need to apply a loop, for example to a specific drum beat that you want to be repeated throughout a certain duration of the final audio stream, you could set the desired number of loops through the SoundComposer.ItemSoundFileLoopSet method.
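For example, a drum beat lasting 2 seconds could be repeated four times in order to cover 8 seconds of the final audio stream; the sketch below assumes a nUniqueIdDrums identifier and a (unique identifier, number of loops) parameter list, to be verified against the method's reference.
Visual C# |
// repeat the 2 seconds drum beat item 4 times
// (hypothetical parameter list: unique identifier and number of loops)
audioSoundEditor1.SoundComposer.ItemSoundFileLoopSet (nUniqueIdDrums, 4);
|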
A special feature available for sound file items is the possibility to apply volume modifications in two ways:
- At the beginning and at the end of the sound file you can apply two separate volume fades through two separate calls to the SoundComposer.ItemSoundFileVolumeFadingSet method, the first one for the fade-in and the second one for the fade-out.
- In any other position of the sound file you can apply one or more linear volume slidings through the SoundComposer.ItemSoundFileVolumeSlidingAdd method: for example, if you need to overlay a spoken voice while the sound is playing, you may want to reduce the volume while the spoken voice is being played and raise it back to its original level once the spoken voice is completed.
As a final but important feature, you may need to fit a sound file whose duration is 1 minute into a portion of the final audio stream 55 seconds long without losing the original pitch of the music or of a spoken voice: this can be achieved through the SoundComposer.ItemSoundFileDurationStretch method, which allows shrinking or enlarging the original sound by modifying its "tempo".
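Applied to that scenario, the stretch could look like the following sketch, where nUniqueIdFile identifies the 1 minute item and the (unique identifier, new duration in milliseconds) parameter list is an assumption to be verified against the method's reference.
Visual C# |
// shrink the 1 minute item to 55 seconds without altering its pitch
// (hypothetical parameter list)
audioSoundEditor1.SoundComposer.ItemSoundFileDurationStretch (nUniqueIdFile, 55000);
|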
Let's see a small code snippet where we create a session having a stereo audio stream; on this stream we will perform the following actions:
• | add two stereo sound files (myfile1.mp3 and myfile2.wma) to the session |
• | the loading range of both sound files is limited to 30 seconds starting from second 10 of each sound file |
• | the first file is loaded at offset 0 on the final audio stream |
• | the second file is loaded at offset 28000 (28 seconds) on the final audio stream |
• | a fade-in of 2 seconds is applied at the beginning of each sound file |
• | a fade-out of 3 seconds is applied at the end of each sound file |
As you may see, the two sound files will overlap for 2 seconds on the final audio stream, during the fade-out of the first and the fade-in of the second:
Visual Basic.NET |
' prepare the audio stream to compose (44100, stereo)
audioSoundEditor1.SoundComposer.SessionInit (44100, 2)

' add the two sound files and obtain respective unique identifiers
Dim nUniqueIdFile1 As Int32 = 0
Dim nUniqueIdFile2 As Int32 = 0
audioSoundEditor1.SoundComposer.ItemSoundFileAdd ("First sound", 0, _
    "c:\myfolder\myfile1.mp3", False, 1, 0, nUniqueIdFile1)
audioSoundEditor1.SoundComposer.ItemSoundFileAdd ("Second sound", 0, _
    "c:\myfolder\myfile2.wma", False, 1, 28000, nUniqueIdFile2)

' limit the loading range of the two sound files from second 10 to second 40 (30 seconds)
audioSoundEditor1.SoundComposer.ItemSoundFileLoadRangeSet (nUniqueIdFile1, 10000, 40000)
audioSoundEditor1.SoundComposer.ItemSoundFileLoadRangeSet (nUniqueIdFile2, 10000, 40000)

' add a linear 2 seconds fade-in to the two sound files
audioSoundEditor1.SoundComposer.ItemSoundFileVolumeFadingSet (nUniqueIdFile1, True, 2000, _
    enumVolumeCurves.VOLUME_CURVE_LINEAR, 0, 0, 0, 0)
audioSoundEditor1.SoundComposer.ItemSoundFileVolumeFadingSet (nUniqueIdFile2, True, 2000, _
    enumVolumeCurves.VOLUME_CURVE_LINEAR, 0, 0, 0, 0)

' add a linear 3 seconds fade-out to the two sound files
audioSoundEditor1.SoundComposer.ItemSoundFileVolumeFadingSet (nUniqueIdFile1, False, 3000, _
    enumVolumeCurves.VOLUME_CURVE_LINEAR, 0, 0, 0, 0)
audioSoundEditor1.SoundComposer.ItemSoundFileVolumeFadingSet (nUniqueIdFile2, False, 3000, _
    enumVolumeCurves.VOLUME_CURVE_LINEAR, 0, 0, 0, 0)

' generate the final audio stream
audioSoundEditor1.SoundComposer.SessionComposeItems ()
|
Visual C# |
// prepare the audio stream to compose (44100, stereo)
audioSoundEditor1.SoundComposer.SessionInit (44100, 2);

// add the two sound files and obtain respective unique identifiers
Int32 nUniqueIdFile1 = 0;
Int32 nUniqueIdFile2 = 0;
audioSoundEditor1.SoundComposer.ItemSoundFileAdd ("First sound", 0,
    @"c:\myfolder\myfile1.mp3", false, 1, 0, ref nUniqueIdFile1);
audioSoundEditor1.SoundComposer.ItemSoundFileAdd ("Second sound", 0,
    @"c:\myfolder\myfile2.wma", false, 1, 28000, ref nUniqueIdFile2);

// limit the loading range of the two sound files from second 10 to second 40 (30 seconds)
audioSoundEditor1.SoundComposer.ItemSoundFileLoadRangeSet (nUniqueIdFile1, 10000, 40000);
audioSoundEditor1.SoundComposer.ItemSoundFileLoadRangeSet (nUniqueIdFile2, 10000, 40000);

// add a linear 2 seconds fade-in to the two sound files
audioSoundEditor1.SoundComposer.ItemSoundFileVolumeFadingSet (nUniqueIdFile1, true, 2000,
    enumVolumeCurves.VOLUME_CURVE_LINEAR, 0, 0, 0, 0);
audioSoundEditor1.SoundComposer.ItemSoundFileVolumeFadingSet (nUniqueIdFile2, true, 2000,
    enumVolumeCurves.VOLUME_CURVE_LINEAR, 0, 0, 0, 0);

// add a linear 3 seconds fade-out to the two sound files
audioSoundEditor1.SoundComposer.ItemSoundFileVolumeFadingSet (nUniqueIdFile1, false, 3000,
    enumVolumeCurves.VOLUME_CURVE_LINEAR, 0, 0, 0, 0);
audioSoundEditor1.SoundComposer.ItemSoundFileVolumeFadingSet (nUniqueIdFile2, false, 3000,
    enumVolumeCurves.VOLUME_CURVE_LINEAR, 0, 0, 0, 0);

// generate the final audio stream
audioSoundEditor1.SoundComposer.SessionComposeItems ();
|
Now let's see another small code snippet where we create a session having again a stereo audio stream; on this stream we will perform the following actions:
• | add two stereo sound files (music.mp3 and voice.wma) to the session |
• | the music.mp3 file contains music only |
• | the voice.wma file contains a spoken voice |
• | the music.mp3 file, whose duration is 3 minutes, is loaded at offset 0 on the final audio stream |
• | the voice.wma file, whose duration is obtained programmatically, is loaded at offset 10000 (10 seconds) on the final audio stream: this means that the two files will overlap for the overall duration of the voice.wma file |
• | in order to prevent the music file from covering what is being said inside the voice file, the volume of the music.mp3 file is reduced, with a small linear fade of 1 second, down to 20% (amplitude 0.2) while the two files are overlapping; as soon as the voice.wma file is completed, the volume of the music.mp3 file is raised back to 100% (amplitude 1.0) with another small volume fade of 1 second. |
Visual Basic.NET |
' prepare the audio stream to compose (44100, stereo)
audioSoundEditor1.SoundComposer.SessionInit (44100, 2)

' add the two sound files and obtain respective unique identifiers
Dim nUniqueIdFile1 As Int32 = 0
Dim nUniqueIdFile2 As Int32 = 0
audioSoundEditor1.SoundComposer.ItemSoundFileAdd ("First sound", 0, _
    "c:\myfolder\music.mp3", False, 1, 0, nUniqueIdFile1)
audioSoundEditor1.SoundComposer.ItemSoundFileAdd ("Second sound", 0, _
    "c:\myfolder\voice.wma", False, 1, 10000, nUniqueIdFile2)

' obtain the duration in milliseconds of the voice.wma file
Dim nDurationMs As Int32 = 0
audioSoundEditor1.SoundComposer.ItemDurationGet (nUniqueIdFile2, nDurationMs)

' slide the volume down for one second on the music.mp3 sound one second before the beginning of the voice.wma file
Dim nDummyVolumeUniqueId As Int32 = 0
audioSoundEditor1.SoundComposer.ItemSoundFileVolumeSlidingAdd (nUniqueIdFile1, _
    9000, 1000, 1, 0.2, nDummyVolumeUniqueId)

' slide the volume up for one second on the music.mp3 sound at the end of the voice.wma file
audioSoundEditor1.SoundComposer.ItemSoundFileVolumeSlidingAdd (nUniqueIdFile1, _
    10000+nDurationMs, 1000, 0.2, 1, nDummyVolumeUniqueId)

' generate the final audio stream
audioSoundEditor1.SoundComposer.SessionComposeItems ()
|
Visual C# |
// prepare the audio stream to compose (44100, stereo)
audioSoundEditor1.SoundComposer.SessionInit (44100, 2);

// add the two sound files and obtain respective unique identifiers
Int32 nUniqueIdFile1 = 0;
Int32 nUniqueIdFile2 = 0;
audioSoundEditor1.SoundComposer.ItemSoundFileAdd ("First sound", 0,
    @"c:\myfolder\music.mp3", false, 1, 0, ref nUniqueIdFile1);
audioSoundEditor1.SoundComposer.ItemSoundFileAdd ("Second sound", 0,
    @"c:\myfolder\voice.wma", false, 1, 10000, ref nUniqueIdFile2);

// obtain the duration in milliseconds of the voice.wma file
Int32 nDurationMs = 0;
audioSoundEditor1.SoundComposer.ItemDurationGet (nUniqueIdFile2, ref nDurationMs);

// slide the volume down for one second on the music.mp3 sound one second before the beginning of the voice.wma file
Int32 nDummyVolumeUniqueId = 0;
audioSoundEditor1.SoundComposer.ItemSoundFileVolumeSlidingAdd (nUniqueIdFile1,
    9000, 1000, 1.0f, 0.2f, ref nDummyVolumeUniqueId);

// slide the volume up for one second on the music.mp3 sound at the end of the voice.wma file
audioSoundEditor1.SoundComposer.ItemSoundFileVolumeSlidingAdd (nUniqueIdFile1,
    10000+nDurationMs, 1000, 0.2f, 1.0f, ref nDummyVolumeUniqueId);

// generate the final audio stream
audioSoundEditor1.SoundComposer.SessionComposeItems ();
|
As you can see, in order to keep the code simpler we have reused the same unique identifier variable for both volume slidings: in a real situation you may want to use a separate variable for each volume sliding.
Audio generated from Microsoft Speech API
Text to speech is the artificial production of human speech, obtained by leveraging the Microsoft Speech API installed on Windows systems. Text is converted into an audio stream containing spoken voice through the voices installed inside the system: you can enumerate installed voices through the combination of the SpeechVoicesNumGet and SpeechVoiceAttributeGet methods. The sound composer can generate audio streams starting from a string of text, through the SoundComposer.ItemSpeechFromStringAdd method, or from a file containing text, through the SoundComposer.ItemSpeechFromFileAdd method; in both cases the provided text may optionally contain XML markup: see the MSDN documentation for a tutorial about XML markup syntax.
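As a sketch, the enumeration of installed voices could look like the fragment below; the parameter lists of SpeechVoicesNumGet and SpeechVoiceAttributeGet, as well as the attribute enumeration, are assumptions to be verified against their reference pages.
Visual C# |
// enumerate the voices installed inside the system (assumed signatures)
Int32 nVoicesCount = audioSoundEditor1.SpeechVoicesNumGet ();
for (Int32 nIndex = 0; nIndex < nVoicesCount; nIndex++)
{
    // obtain the friendly name of the voice (hypothetical attribute enumeration)
    string strVoiceName = audioSoundEditor1.SpeechVoiceAttributeGet (nIndex,
        enumSpeechVoiceAttributes.VOICE_ATTRIBUTE_NAME);
    Console.WriteLine (strVoiceName);
}
|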
Once the text to speech has been added to the session, you can modify some of its settings through the following set of methods:
- SoundComposer.ItemSpeechFileSet to modify the absolute pathname of the file containing the text to speech (only if the item was added through the SoundComposer.ItemSpeechFromFileAdd method)
- SoundComposer.ItemSpeechStringSet to modify the string of the text to speech (only if the item was added through the SoundComposer.ItemSpeechFromStringAdd method)
- SoundComposer.ItemSpeechTextGet to obtain the current string of text to speech or the pathname of the file containing the text to speech
- SoundComposer.ItemSpeechVoiceGet to obtain the speaking voice
- SoundComposer.ItemSpeechVoiceSet to modify the speaking voice
Let's see a small code snippet where we create a session having a stereo audio stream; on this stream we will add two mono streams, one for each channel, containing speech generated from a string of text (in order to keep the code simpler we have reused the same unique identifier variable for both items; in a real situation you may want to use a separate variable for each item):
Visual Basic.NET |
' prepare the audio stream to compose (44100, stereo)
audioSoundEditor1.SoundComposer.SessionInit (44100, 2)

Dim nDummyUniqueId As Int32 = 0
Dim nChannel As Integer = 0

' add the string of text on the two channels of the audio stream (channels 0 and 1)
audioSoundEditor1.SoundComposer.ItemSpeechFromStringAdd ("", nChannel, _
    "This is a string of text to speech", 0, True, 1, 0, nDummyUniqueId)
audioSoundEditor1.SoundComposer.ItemSpeechFromStringAdd ("", nChannel+1, _
    "This is a string of text to speech", 0, True, 1, 0, nDummyUniqueId)

' generate the final audio stream
audioSoundEditor1.SoundComposer.SessionComposeItems ()
|
Visual C# |
// prepare the audio stream to compose (44100, stereo)
audioSoundEditor1.SoundComposer.SessionInit (44100, 2);

Int32 nDummyUniqueId = 0;
int nChannel = 0;

// add the string of text on the two channels of the audio stream (channels 0 and 1)
audioSoundEditor1.SoundComposer.ItemSpeechFromStringAdd ("", nChannel,
    "This is a string of text to speech", 0, true, 1, 0, ref nDummyUniqueId);
audioSoundEditor1.SoundComposer.ItemSpeechFromStringAdd ("", nChannel+1,
    "This is a string of text to speech", 0, true, 1, 0, ref nDummyUniqueId);

// generate the final audio stream
audioSoundEditor1.SoundComposer.SessionComposeItems ();
|
Adding pure, monaural and binaural wave tones
A pure wave tone is typically a tone with a sinusoidal waveform having an amplitude and a frequency. The component allows adding to the session, on a specific audio channel, sinusoidal, square, sawtooth and triangular waveforms through the SoundComposer.ItemWaveToneAdd method: wave tones are always mono, meaning that for each call the wave tone will be stored inside a specific audio channel of the audio stream; if for example you should need to create a stereo wave tone on the front speakers of the audio stream, you would need to call this method twice, once for channel 0 and once for channel 1.
Pure wave tones can be used to create something more complex like monaural and binaural wave tones. Binaural wave tones are produced by sending two pure wave tones, having different frequencies, to two different channels of the audio stream: the two tones combine directly in the brain of the listener. Monaural wave tones are produced when two or more pure wave tones, having different frequencies, combine digitally or naturally before the sounds reach the ears, as opposed to combining in the brain like binaural wave tones. The component allows creating composite wave tones with more than two pure wave tones combined together by adding, always on the same channel, two or more pure wave tones, each having its own frequency and amplitude.
Let's see a small code snippet where we create a session having a 4 channels audio stream; on the first stereo pair (channels 0 and 1) we will add a binaural wave tone while, on the second stereo pair (channels 2 and 3), we will add a monaural wave tone (in order to keep the code simpler we have reused the same unique identifier variable for all items; in a real situation you may want to use a separate variable for each item):
Visual Basic.NET |
' prepare the audio stream to compose (44100, 4 channels)
audioSoundEditor1.SoundComposer.SessionInit (44100, 4)

Dim nDummyUniqueId As Int32 = 0

' create the binaural tone on the first channels pair (0 and 1)
' add the first 3 seconds long sine tone on the left channel of the stream (400 hz on channel 0)
Dim nChannel As Integer = 0
audioSoundEditor1.SoundComposer.ItemWaveToneAdd ("", nChannel, _
    enumSoundGenWaveTypes.SOUNDGEN_WAVE_TYPE_SINE, 400, 1, 3000, 0, nDummyUniqueId)

' add the second 3 seconds long sine tone on the right channel of the stream (480 hz on channel 1)
audioSoundEditor1.SoundComposer.ItemWaveToneAdd ("", nChannel+1, _
    enumSoundGenWaveTypes.SOUNDGEN_WAVE_TYPE_SINE, 480, 1, 3000, 0, nDummyUniqueId)

' create the monaural tone on the second channels pair (2 and 3)
' loop on the two channels
For nChannel = 2 To 3
    ' add a 3 seconds long 400 hz sine tone on the current channel
    audioSoundEditor1.SoundComposer.ItemWaveToneAdd ("", nChannel, _
        enumSoundGenWaveTypes.SOUNDGEN_WAVE_TYPE_SINE, 400, 1, 3000, 0, nDummyUniqueId)

    ' add a 3 seconds long 480 hz sine tone on the current channel
    audioSoundEditor1.SoundComposer.ItemWaveToneAdd ("", nChannel, _
        enumSoundGenWaveTypes.SOUNDGEN_WAVE_TYPE_SINE, 480, 1, 3000, 0, nDummyUniqueId)
Next nChannel

' generate the final audio stream
audioSoundEditor1.SoundComposer.SessionComposeItems ()
|
Visual C# |
// prepare the audio stream to compose (44100, 4 channels)
audioSoundEditor1.SoundComposer.SessionInit (44100, 4);

Int32 nDummyUniqueId = 0;

// create the binaural tone on the first channels pair (0 and 1)
// add the first 3 seconds long sine tone on the left channel of the stream (400 hz on channel 0)
int nChannel = 0;
audioSoundEditor1.SoundComposer.ItemWaveToneAdd ("", nChannel,
    enumSoundGenWaveTypes.SOUNDGEN_WAVE_TYPE_SINE, 400, 1.0f, 3000, 0, ref nDummyUniqueId);

// add the second 3 seconds long sine tone on the right channel of the stream (480 hz on channel 1)
audioSoundEditor1.SoundComposer.ItemWaveToneAdd ("", nChannel+1,
    enumSoundGenWaveTypes.SOUNDGEN_WAVE_TYPE_SINE, 480, 1.0f, 3000, 0, ref nDummyUniqueId);

// create the monaural tone on the second channels pair (2 and 3)
// loop on the two channels
for (nChannel = 2; nChannel <= 3; nChannel++)
{
    // add a 3 seconds long 400 hz sine tone on the current channel
    audioSoundEditor1.SoundComposer.ItemWaveToneAdd ("", nChannel,
        enumSoundGenWaveTypes.SOUNDGEN_WAVE_TYPE_SINE, 400, 1.0f, 3000, 0, ref nDummyUniqueId);

    // add a 3 seconds long 480 hz sine tone on the current channel
    audioSoundEditor1.SoundComposer.ItemWaveToneAdd ("", nChannel,
        enumSoundGenWaveTypes.SOUNDGEN_WAVE_TYPE_SINE, 480, 1.0f, 3000, 0, ref nDummyUniqueId);
}

// generate the final audio stream
audioSoundEditor1.SoundComposer.SessionComposeItems ();
|
Once a wave tone has been added to the session, you can modify some of its settings through the following set of methods:
- SoundComposer.ItemWaveToneFrequencyGet to obtain the frequency of the wave tone
- SoundComposer.ItemWaveToneFrequencySet to modify the frequency of the wave tone
- SoundComposer.ItemWaveToneTypeGet to obtain the current type of wave tone
- SoundComposer.ItemWaveToneTypeSet to modify the current type of wave tone
Unlike pure wave tones, a sliding wave tone changes its frequency and/or amplitude dynamically during a given interval of time. A sliding wave tone can be added through the SoundComposer.ItemSlidingWaveToneAdd method; as seen for pure wave tones, sliding wave tones are always mono, meaning that for each call the sliding wave tone will be stored inside a specific audio channel of the final audio stream; if for example you should need to create a stereo sliding wave tone on the front speakers of the audio stream, you would need to call this method twice, once for channel 0 and once for channel 1.
Once a sliding wave tone has been added to the session, you can modify some of its settings through the following set of methods:
- SoundComposer.ItemSlidingWaveToneLimitsGet to obtain the initial and final frequency and/or amplitude of the sliding wave tone
- SoundComposer.ItemSlidingWaveToneLimitsSet to modify the initial and final frequency and/or amplitude of the sliding wave tone
- SoundComposer.ItemSlidingWaveToneTypeGet to obtain the current type of sliding wave tone
- SoundComposer.ItemSlidingWaveToneTypeSet to modify the current type of sliding wave tone
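As a sketch, a sine tone sliding from 200 Hz up to 800 Hz over 5 seconds on channel 0 could be added as shown below; the parameter order is an assumption modeled on ItemWaveToneAdd and must be verified against the SoundComposer.ItemSlidingWaveToneAdd reference.
Visual C# |
// add a 5 seconds sine tone sliding from 200 hz to 800 hz at full amplitude
// on channel 0 (hypothetical parameter order modeled on ItemWaveToneAdd)
Int32 nDummyUniqueId = 0;
audioSoundEditor1.SoundComposer.ItemSlidingWaveToneAdd ("", 0,
    enumSoundGenWaveTypes.SOUNDGEN_WAVE_TYPE_SINE, 200, 800, 1.0f, 1.0f,
    5000, 0, ref nDummyUniqueId);
|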
Noises are obtained by generating a random signal with a constant power spectral density. The component allows adding to a session white, pink and brown noises through the SoundComposer.ItemNoiseAdd method; as seen for pure wave tones, noises are always mono, meaning that for each call the noise will be stored inside a specific audio channel of the final audio stream; if for example you should need to create a stereo noise on the front speakers of the audio stream, you would need to call this method twice, once for channel 0 and once for channel 1.
Once a noise has been added to the session, you can modify some of its settings through the following set of methods:
- SoundComposer.ItemNoiseTypeGet to obtain the current type of noise
- SoundComposer.ItemNoiseTypeSet to modify the type of noise
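As a sketch, 3 seconds of white noise on both channels of a stereo stream could be added as shown below; the parameter order, modeled on ItemWaveToneAdd, and the noise type enumeration are assumptions to be verified against the SoundComposer.ItemNoiseAdd reference.
Visual C# |
// add 3 seconds of white noise on channels 0 and 1
// (hypothetical parameter order and noise type enumeration)
Int32 nDummyUniqueId = 0;
for (int nChannel = 0; nChannel <= 1; nChannel++)
{
    audioSoundEditor1.SoundComposer.ItemNoiseAdd ("", nChannel,
        enumSoundGenNoiseTypes.NOISE_TYPE_WHITE, 1.0f, 3000, 0, ref nDummyUniqueId);
}
|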
DTMF (Dual Tone Multi Frequency) tones are used for telecommunication signaling over analog telephone lines in the voice-frequency band between telephone handsets and other communications devices and the switching center. A sequence of DTMF tones can be added to the session through the SoundComposer.ItemDtmfStringAdd method; as seen for pure wave tones, DTMF tones are always mono, meaning that for each call the sequence of DTMF tones will be stored inside a specific audio channel of the final audio stream; if for example you should need to create a stream of DTMF tones on the front speakers of the audio stream, you would need to call this method twice, once for channel 0 and once for channel 1.
Once a sequence of DTMF tones has been added to the session, you can modify some of its settings through the following set of methods:
- SoundComposer.ItemDtmfStringGet to obtain current settings of the DTMF string
- SoundComposer.ItemDtmfStringSet to modify settings of the DTMF string
Let's see a small code snippet where we create a session having a stereo audio stream; on this stream we will add two mono streams, one for each channel, containing DTMF tones generated from a string of text (in order to keep the code simpler we have reused the same unique identifier variable for both items; in a real situation you may want to use a separate variable for each item):
Visual Basic.NET |
' prepare the audio stream to compose (44100, stereo)
audioSoundEditor1.SoundComposer.SessionInit (44100, 2)

Dim nDummyUniqueId As Int32 = 0
Dim nChannel As Integer = 0

' add the string of text containing the sequence of DTMF tones on the two channels of the audio stream (channels 0 and 1)
' each tone has a duration of 150 ms with a silence of 50 ms between each tone
' in order to avoid unwanted "pops" when a tone is performed, a 10 ms fade is applied at the beginning and at the end of each tone
audioSoundEditor1.SoundComposer.ItemDtmfStringAdd ("", nChannel, _
    "0010123456789", 150, 50, 10, 10, 1, 0, nDummyUniqueId)
audioSoundEditor1.SoundComposer.ItemDtmfStringAdd ("", nChannel+1, _
    "0010123456789", 150, 50, 10, 10, 1, 0, nDummyUniqueId)

' generate the final audio stream
audioSoundEditor1.SoundComposer.SessionComposeItems ()
|
Visual C# |
// prepare the audio stream to compose (44100, stereo)
audioSoundEditor1.SoundComposer.SessionInit (44100, 2);

Int32 nDummyUniqueId = 0;

// add the string of text containing the sequence of DTMF tones on the two channels of the audio stream (channels 0 and 1)
// each tone has a duration of 150 ms with a silence of 50 ms between each tone
// in order to avoid unwanted "pops" when a tone is performed, a 10 ms fade is applied at the beginning and at the end of each tone
int nChannel = 0;
audioSoundEditor1.SoundComposer.ItemDtmfStringAdd ("", nChannel,
    "0010123456789", 150, 50, 10, 10, 1, 0, ref nDummyUniqueId);
audioSoundEditor1.SoundComposer.ItemDtmfStringAdd ("", nChannel+1,
    "0010123456789", 150, 50, 10, 10, 1, 0, ref nDummyUniqueId);

// generate the final audio stream
audioSoundEditor1.SoundComposer.SessionComposeItems ();
|
A sample of usage of the sound composer object in Visual Basic.NET and Visual C# can be found inside the following project installed with the product's setup package:
- SoundComposer