Explore GitLab

Discover projects, groups, and snippets. Share your projects with others.


  • Soundjack is a real-time communication system that exposes every quality- and latency-relevant parameter to the user. Depending on the physical distance, the network capacities, the actual network conditions, and the routing, musical interaction (or at least compromised musical interaction) is possible.
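    A rough back-of-the-envelope sketch (in Python) of how distance translates into one-way latency. The fibre propagation speed, processing budget, routing overhead, and the ~30 ms interaction threshold are generic assumptions for illustration, not figures taken from Soundjack.

    ```python
    # Estimate one-way latency for networked music performance (illustrative only).
    SPEED_IN_FIBRE_KM_PER_MS = 200     # roughly 2/3 of the speed of light
    INTERACTION_THRESHOLD_MS = 30      # rough upper bound for tight ensemble playing

    def one_way_latency_ms(distance_km, processing_ms=10, routing_overhead=1.5):
        """Propagation delay plus a fixed budget for buffering/processing;
        routing_overhead accounts for paths longer than the direct route."""
        propagation = distance_km * routing_overhead / SPEED_IN_FIBRE_KM_PER_MS
        return propagation + processing_ms

    for km in (50, 500, 2000, 8000):
        latency = one_way_latency_ms(km)
        verdict = "musical interaction plausible" if latency <= INTERACTION_THRESHOLD_MS else "compromised"
        print(f"{km:>5} km -> ~{latency:.1f} ms one-way ({verdict})")
    ```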

  • Antescofo is a real-time score following and coordination language for computer music composition and performance.

  • Real-time Multiple-Pitch Transcription

  • Designed in close collaboration with music education teachers, Musique Lab 2 is an environment supporting music pedagogy, based on Open Music, Ircam's computer-aided composition environment used by several generations of composers, musicologists, and researchers. A tool with a wide variety of uses, Musique Lab 2 fits into the teacher's pedagogy without constraining their familiar practices.

  • In speech, a diphone is defined as a transition between two phonemes. In a musical context, this can be taken to mean not only a transition between two vocal sounds but, more generally, a transition between two sounds of any kind, whether instrumental, vocal, or recorded "sound objects". As such, the definition of a musical "diphone" can also be extended to include a single stable sound or silence.

    The idea of synthesis using diphones was conceived in the late 1980s by Xavier Rodet, in an attempt to address the problem of convincingly synthesizing a musical phrase from both transient and stable sounds. With traditional analog studio techniques, a series of transient and stable sounds can be concatenated, or pieced together, by splicing small pieces of tape end to end; in today's digital studios, this is done by cross-fading. In both cases the result often sounds far from convincing, because the inner contents of the spliced or faded sounds do not generally match. In Rodet's system of "generalized diphone control and synthesis", sounds are carefully analyzed, and it is the analysis data that is "spliced" or "faded" together, by interpolating the values between neighboring segments of analysis data. The resulting interpolated analysis can then be resynthesized, providing a much cleaner and more natural result than could be obtained through simple tape splicing or cross-fading.

    Diphone was first implemented on UNIX workstations in 1988 by Xavier Rodet and Philippe Depalle, using source-filter synthesis and, later, additive synthesis. There was only a very rudimentary graphical user interface, which allowed analysis data segments to be placed consecutively on the screen. The first Macintosh version of Diphone, completed in 1996, was (and continues to be) programmed by Adrien Lefevre.
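    A minimal Python sketch of the interpolation idea described above, assuming additive-analysis data stored as (frequency, amplitude) pairs per partial; the data layout and function names are assumptions for illustration, not Diphone's actual implementation.

    ```python
    import numpy as np

    def interpolate_frames(frame_a, frame_b, n_frames):
        """Linearly interpolate analysis data (per-partial frequency and amplitude)
        from the end of one segment to the start of the next, producing the smooth
        transition region that a tape splice or cross-fade cannot give."""
        # frame_a, frame_b: arrays of shape (n_partials, 2) -> (freq_hz, amplitude)
        t = np.linspace(0.0, 1.0, n_frames)[:, None, None]
        return (1.0 - t) * frame_a + t * frame_b          # (n_frames, n_partials, 2)

    def resynthesize(frames, sr=44100, frame_dur=0.01):
        """Additive resynthesis: a bank of sinusoids whose frequency and amplitude
        are updated frame by frame, with phase carried across frame boundaries."""
        hop = int(frame_dur * sr)
        phase = np.zeros(frames.shape[1])
        chunks = []
        for freq, amp in zip(frames[:, :, 0], frames[:, :, 1]):
            t = np.arange(hop) / sr
            chunks.append((amp[:, None] * np.sin(2 * np.pi * freq[:, None] * t + phase[:, None])).sum(axis=0))
            phase = (phase + 2 * np.pi * freq * hop / sr) % (2 * np.pi)
        return np.concatenate(chunks)

    # Example: glide from a two-partial spectrum to a brighter one over 50 frames
    a = np.array([[220.0, 0.5], [440.0, 0.3]])
    b = np.array([[330.0, 0.4], [990.0, 0.4]])
    signal = resynthesize(interpolate_frames(a, b, 50))
    ```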

  • Micro-tonal player made by Benoit Meudic

  • The "Multiplayer" is a standalone application for processing/decoding/diffusion of multichannel audio files in various formats.

  • A meeting point for developing OpenMusic patches used to create or work with rhythmic structures. For background on what rhythm is: https://en.wikipedia.org/wiki/Rhythm https://www.britannica.com/art/rhythm-music http://www.signosemio.com/semiotics-of-rhythm.asp

  • In this project we will design a patch for a chorder system. The challenge of chording often comes up in exercises in introductory harmony lessons: you are given a musical scale and have to harmonize it with chords. The patch we develop here will do that work for you automatically.
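    As a point of comparison, here is a small Python illustration of the basic idea (building a diatonic triad on each scale degree by stacking thirds); it is not the OpenMusic patch itself, and the MIDI note numbers are just an example.

    ```python
    def harmonize(scale):
        """Build a triad on each degree of a scale by stacking diatonic thirds,
        wrapping around the scale and raising wrapped notes by an octave."""
        n = len(scale)
        chords = []
        for i in range(n):
            # root, third and fifth are scale degrees i, i+2 and i+4
            chord = [scale[(i + step) % n] + 12 * ((i + step) // n) for step in (0, 2, 4)]
            chords.append(chord)
        return chords

    # Example: C major scale as MIDI note numbers
    c_major = [60, 62, 64, 65, 67, 69, 71]
    for degree, chord in zip("CDEFGAB", harmonize(c_major)):
        print(degree, chord)
    ```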

  • Max patcher for dynamic orchestration with Orchidea v.0.6

  • This is a first project, so it won't revolutionize computer music, but it will be a test of how programming in OpenMusic is best done. I hope it gives us a chance to try out the OpenMusic IDE and this forum; please keep that in mind while reading through the forum. In short, this is my "Hello World" project. It would be great if you could give me tips on how to do this better.

    In the hope of good cooperation

    CreCo

  • ircamLAB RELEASES TS2, AN AUDIO EDITING / TIME-STRETCH APPLICATION: software that no one has built before.

    ircamLAB proudly announces the release of "TS2", a major upgrade to its acclaimed TS application.

    Previous owners of the TS software can upgrade for FREE by downloading the latest version from the ircamLAB download portal.

    A time-limited version is also available for all Ircam Forum Premium Members.

  • Personal tools

  • Old OpenMusic material

  • EdbMails is one of the most secure ways to convert OST files to PST.

  • This project attempts to integrate real-time additive resynthesis and spatialization in the time domain. Using the Sigmund~ object, an input mono signal is analyzed and a series of partials is identified. These partials are then spatialized randomly through Spat5, with some control left to the user. The output signal is set up for binaural listening by default, but it can be changed to any multi-speaker listening environment. The patch is developed in Max/MSP and relies on the Spat5 and Odot externals.
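    A rough offline Python sketch of the idea: each analyzed partial gets a random azimuth, rendered here with simple equal-power stereo panning. The partial list is invented, and this toy stand-in does not reproduce the actual Sigmund~ analysis or Spat5 binaural processing.

    ```python
    import numpy as np

    rng = np.random.default_rng()
    SR = 44100

    def spatialize_partials(partials, dur=1.0):
        """Give each (frequency, amplitude) partial a random azimuth and mix it
        into a stereo buffer with an equal-power panning law."""
        t = np.arange(int(dur * SR)) / SR
        out = np.zeros((2, t.size))
        for freq, amp in partials:
            azimuth = rng.uniform(-np.pi / 2, np.pi / 2)   # random position, left to right
            pan = (azimuth + np.pi / 2) / np.pi            # 0 = hard left, 1 = hard right
            sine = amp * np.sin(2 * np.pi * freq * t)
            out[0] += np.cos(pan * np.pi / 2) * sine       # left channel gain
            out[1] += np.sin(pan * np.pi / 2) * sine       # right channel gain
        return out

    # Example: partials such as a pitch analysis might report for a simple tone
    stereo = spatialize_partials([(220.0, 0.4), (440.0, 0.25), (660.0, 0.15)])
    ```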

  • Skatart is a Max for Live device that brings together concatenative synthesis techniques from Catart and synthesis by mosaicing.

  • MaxScore provides music notation in Max and Ableton Live via Max for Live.

  • TinySOL, OrchideaSOL and FullSOL are three datasets containing instrumental samples, including a wide range of extended techniques.

  • TinySOL, OrchideaSOL and FullSOL are three datasets containing instrumental samples, including a wide range of extended techniques.
