Commit 6713bf48 authored by Greg

Update README.md

parent 70006912
# AudioGuide

AudioGuide is a program for concatenative synthesis developed by Ben Hackbarth, Norbert Schnell, Philippe Esling, and Diemo Schwarz. It is written in Python, but you do not need to know Python to use it: you interact with the program through simple options files written in Python syntax.
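As a rough illustration, an options file is just a small Python script that assigns the settings AudioGuide should use. The variable names and values below are placeholders chosen for readability, not AudioGuide's actual option names; see the documentation links below for the real syntax.

```python
# ops.py -- an illustrative options file written in Python syntax.
# The option names below (TARGET_SOUND, CORPUS_FOLDERS, OUTPUT_SCORE) are
# placeholders for this sketch, not AudioGuide's real variable names.

TARGET_SOUND = "snd/target/speech.aiff"                           # sound to be reconstructed
CORPUS_FOLDERS = ["snd/corpus/strings", "snd/corpus/percussion"]  # sounds to draw from
OUTPUT_SCORE = "output/concatenation.csd"                         # csound score to render

# Because the file is plain Python, ordinary Python expressions work too.
SEGMENT_LENGTHS = [0.25 * n for n in range(1, 5)]
```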
AudioGuide should run out of the box on any recent version of OS X with Python 2 or Python 3. The Python module numpy is required, but it comes preinstalled with most newer Python distributions. If you want AudioGuide to render concatenations automatically, you also need Csound 6 installed.
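As a quick, optional sanity check (not part of the AudioGuide distribution), you can confirm the prerequisites from a Python 3 prompt:

```python
# Optional prerequisite check -- illustrative only, not part of AudioGuide.
import shutil, sys

print("python:", sys.version.split()[0])

try:
    import numpy
    print("numpy:", numpy.__version__)   # required by AudioGuide
except ImportError:
    print("numpy is missing")

# Csound is only needed if you want concatenations rendered automatically.
print("csound on PATH:", shutil.which("csound") is not None)
```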
AudioGuide differs from other programs for concatenative synthesis in several notable ways:

* AudioGuide is not realtime, so sounds can be layered much more densely than in realtime concatenation. Non-realtime analysis also permits more flexible and creative mappings between target and corpus descriptors, as well as algorithmic accounting for overlapping corpus sounds in descriptor calculations. More information about controlling the superimposition of sounds is [here](http://www.benhackbarth.com/audioGuide/docs_v1.35.html#TheSUPERIMPOSEvariabls).
* AudioGuide provides a large number of controls for fine-tuning which sounds are included in the corpus, letting the user include and exclude segments according to descriptor values or filenames, restrict segment repetition, scale amplitude, and more. See all of the options [here](http://www.benhackbarth.com/audioGuide/docs_v1.35.html#TheCORPUSVariable).
* AudioGuide aims to give maximum creative control over how the sounds of the corpus are mapped onto the target. Many different configurations for [normalizing corpus and target data](http://www.benhackbarth.com/audioGuide/docs_v1.35.html#Normalization) give the user a high degree of control over the results and permit creative flexibility in defining similarity.
* Similarity between target and corpus sounds can be evaluated using time-varying descriptors, giving a better sense of the temporal morphology of sounds. A full list of available descriptors is [here](http://www.benhackbarth.com/audioGuide/docs_v1.35.html#Appendix1Descriptors).
* AudioGuide has a robust and flexible system for defining how corpus samples are matched to target segments. One may find the best match according to a single descriptor, but one may also define multiple search "passes", effectively creating a hierarchical search routine, and add boolean tests within the search function to further nuance the search process. See [here](http://www.benhackbarth.com/audioGuide/docs_v1.35.html#SEARCHvariable) and the sketch after this list.
* By default AudioGuide creates a csound score that is rendered automatically at the end of the concatenative process. The program can also write JSON output (playable in Max via a patch provided in the distro), text file output, and MIDI output.
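To make the hierarchical search idea concrete, here is a minimal sketch of what a multi-pass search specification could look like inside an options file. The helper names (`search_pass`, `descriptor`), the descriptor labels, and the `SEARCH` variable shown here are assumptions for illustration only; the SEARCH documentation linked above defines AudioGuide's actual syntax.

```python
# A hedged sketch of a two-pass, hierarchical search specification.
# The helper names (search_pass, descriptor) and descriptor labels are
# illustrative assumptions, not AudioGuide's actual interface.

def descriptor(name, weight=1.0):
    # Stand-in for a reference to a (possibly time-varying) audio descriptor.
    return {"name": name, "weight": weight}

def search_pass(method, *descriptors, test=None):
    # Stand-in for one pass of the matching routine; `test` models an
    # optional boolean condition a corpus segment must satisfy.
    return {"method": method, "descriptors": list(descriptors), "test": test}

SEARCH = [
    # Pass 1: keep only corpus segments roughly as long and as loud as the target segment.
    search_pass("closest_percent", descriptor("effDur"), descriptor("power"),
                test="power > -40"),
    # Pass 2: of those, choose the best spectral match using time-varying descriptors.
    search_pass("closest", descriptor("mfccs", weight=2.0), descriptor("centroid")),
]
```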