Dear Ossia users,
[Main question]
I want to simulate a soundpainting orchestra. Is Score the right tool for me?
[Context]
I am building a soundpainting recognition tool in Max/MSP (https://github.com/arthur-parmentier/soundpainting-signs-gestures-recognition) and I want to simulate how an orchestra would respond to the requests that my tool can recognize.
Examples of requests (from the soundpainting language):
Whole Group - Minimalism - Play (ask the whole orchestra to play a minimalist loop)
Percussions - Long Tone - Slowly Enter (ask the percussion group to play a long tone within 5 seconds)
…
String 1 - Improvise - High Volume - Slow tempo - Play
My Max patch basically recognizes each sign and converts the sentence (the request) into OSC-like commands such as “/strings/2/minimalism/start 0”, “/percussions/1/longtone/volume 0.5” and so on.
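For reference, here is a minimal sketch (in Python, with the python-osc package) of the kind of messages my patch emits, in case someone wants to test the receiving side without Max. The host, port and exact addresses are placeholders, not something fixed by my patch:

```python
# Minimal sketch: emitting the kind of OSC commands the Max patch produces.
# Host, port and addresses are placeholders; adapt them to whatever the
# receiving application (e.g. ossia score) actually listens on.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # hypothetical host/port

# "Strings 2 - Minimalism - Play" -> start the minimalist loop
client.send_message("/strings/2/minimalism/start", 0)

# "Percussions 1 - Long Tone - volume" -> set the long-tone volume
client.send_message("/percussions/1/longtone/volume", 0.5)
```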
[The features that would be necessary]
Because there are few videos/demos so far for Ossia, I had trouble figuring out all the features of Score and Libossia. Although I have seen that there is already a package for Max and some basic tutorials on Score, I must say I could not figure out whether it was suitable for my use case or not. These are the features I am looking for:
- Most importantly, separate tempos for each instrument. Unlike common DAWs, I am looking for a way to change the tempos of separate clips/tracks independently.
- The ability to synchronize “on the fly” tempos, volumes or other parameters of the score.
- MIDI item editing (I know it’s already there) with the possibility of integrating randomness/generative processes.
- The possibility of grouping things in hierarchical ways such as “group/instrument/track/clip” or equivalent.
I would really appreciate your feedback on these points, and hope to move on to using Score soon!
Arthur
Hello !
So, the current stable version has no notion of tempo.
However, the next version (the current alpha) has a notion of hierarchical & polyrhythmic tempo, which should be able to do what you want… mostly.
What does “Ability to synchronize “on the fly” tempos, volumes or other parameters of the score” mean for you, precisely ?
Midi items editing (I know it’s already there) with the possibility of integration randomness/generative processes
That’s a good idea, being able to generate some MIDI. Though I wonder if you wouldn’t be better served by using OpenMusic for this part, as it is likely the best available tool right now for generative MIDI.
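To illustrate what I mean by generating some MIDI, here is a very rough sketch in Python with the mido package: a random pentatonic phrase written out to a MIDI file. It only illustrates the idea of a randomness/generative process; it is not something score or OpenMusic does for you as-is, and all the numbers are arbitrary:

```python
# Rough sketch of a generative MIDI process: a random pentatonic phrase
# written to a standard MIDI file with mido. Purely illustrative.
import random
import mido

PENTATONIC = [60, 62, 64, 67, 69]  # C major pentatonic, one octave

mid = mido.MidiFile(ticks_per_beat=480)
track = mido.MidiTrack()
mid.tracks.append(track)

for _ in range(16):
    note = random.choice(PENTATONIC) + 12 * random.choice([0, 1])
    duration = random.choice([240, 480])  # eighth or quarter note, in ticks
    track.append(mido.Message('note_on', note=note, velocity=80, time=0))
    track.append(mido.Message('note_off', note=note, velocity=0, time=duration))

mid.save('generated_phrase.mid')
```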
Will make a short video to give you an example of what will be available in the next version so that you can tell me if that’s what you need.
Hi Arthur,
At first look, it seems that score could do the job, although it is important to note that beat and time signature synchronization is a feature of v3, which is still in alpha (as @jcelerier mentioned).
v2, which is stable, allows many of the things you describe, and we actually have a soundpainting project running on it.
The gesture recognition (not in the repos) sends an index for each recognized gesture through OSC, triggering different short loops or sequences. The received indices are also learned to create software improvisation with the factor oracle process. I am awaiting the rights to share pictures and videos of the residency and will post them here if I can.
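For reference, here is a minimal Python sketch of the factor oracle idea applied to a stream of recognized gesture indices. It is the textbook construction (Allauzen / Crochemore / Raffinot) plus a naive navigation strategy, written for illustration only; it is not the code we actually run in the project:

```python
# Minimal factor oracle sketch: learn a sequence of gesture indices and
# recombine it into an "improvised" stream. Illustrative only.
import random

def build_oracle(sequence):
    """Build forward transitions and suffix links for the factor oracle."""
    trans = [{} for _ in range(len(sequence) + 1)]
    sfx = [-1] * (len(sequence) + 1)
    for i, sym in enumerate(sequence, start=1):
        trans[i - 1][sym] = i
        k = sfx[i - 1]
        while k > -1 and sym not in trans[k]:
            trans[k][sym] = i
            k = sfx[k]
        sfx[i] = 0 if k == -1 else trans[k][sym]
    return trans, sfx

def improvise(sequence, length=20, continuity=0.8):
    """Mostly replay the learned sequence; sometimes jump through a suffix
    link and pick another transition to recombine the material."""
    trans, sfx = build_oracle(sequence)
    state, output = 0, []
    while len(output) < length:
        if state < len(sequence) and (random.random() < continuity or sfx[state] < 0):
            output.append(sequence[state])      # follow the original thread
            state += 1
        else:
            state = max(sfx[state], 0)          # recombine via a suffix link
            sym, nxt = random.choice(list(trans[state].items()))
            output.append(sym)
            state = nxt
    return output

# e.g. improvise([3, 1, 4, 1, 5, 9, 2, 6]) -> a recombined gesture stream
```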
Many thanks for your fast answer.
By “Ability to synchronize “on the fly” tempos, volumes or other parameters of the score”, I mean that the parameters of one piece of content (for instance the tempo/speed of a clip) should be able to know about the parameters of other pieces of content.
The idea is to imitate the behavior of real performers that are able to synchronize their tempos/volumes… with other performers in real time by listening.
For instance, I would like to be able to reproduce Ableton Live’s synchronization of the tempo of one clip with the global tempo of the set, so that each clip starts at the beginning of the phrase and everything sounds like a band playing, not like separate pieces of content.
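To make the Ableton Live analogy concrete, the behaviour I am after is basically this kind of arithmetic: a clip launch request is deferred to the next phrase boundary of the shared tempo, so everything lines up. The names and numbers below are only illustrative, not an ossia score API:

```python
# Sketch of quantized clip launching against a shared (global) tempo.
# Phrase length and BPM values are illustrative.

def next_phrase_start(now_beats, phrase_beats=16):
    """Beat position of the next phrase boundary after `now_beats`."""
    return (int(now_beats // phrase_beats) + 1) * phrase_beats

def seconds_until_launch(now_beats, global_bpm, phrase_beats=16):
    """How long a clip triggered at `now_beats` waits before it starts."""
    wait_beats = next_phrase_start(now_beats, phrase_beats) - now_beats
    return wait_beats * 60.0 / global_bpm

# A clip requested at beat 21.5, at 120 BPM with 16-beat phrases,
# would start at beat 32, i.e. 5.25 seconds later.
print(seconds_until_launch(21.5, 120))  # -> 5.25
```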
Thanks Thibaud for the pointer to the rainOfMusic project. I guess it would be nice for me to know more about it; maybe you would also be interested in my contribution. Let me reach you in PM for this.
I will wait for @jcelerier’s videos in order to assess whether working with the alpha version is interesting/feasible for me!
Hello !
Stumbled on a couple of bugs when trying to make the videos; there are still a couple of months of work to get all of this into a production state, so don’t hold your breath if you need to do this tomorrow :-).
The first video shows how you can have multiple concurrent “timelines” with different tempos and time signatures:
The second video shows how to control one of the tempos through an external control - I used a simple LFO for the example, but that could also be external OSC messages, etc.
What score doesn’t have (and likely won’t have unless someone contributes it) is tempo/BPM detection & estimation from live audio sources. That’s doable, but not really a priority, as there are plenty of ways to do this with good accuracy in external software and then just send the estimated tempo to ossia score through OSC.
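As an illustration of that workflow, here is a rough sketch that estimates a tempo with librosa (offline, from a file: a live source would need a streaming beat tracker, but the OSC side stays the same) and forwards it with python-osc. The "/tempo" address, host and port are placeholders; use whatever your score actually exposes:

```python
# Sketch: estimate a tempo externally and forward it to ossia score via OSC.
# File name, OSC address, host and port are placeholders.
import librosa
from pythonosc.udp_client import SimpleUDPClient

y, sr = librosa.load("performance_excerpt.wav")        # hypothetical audio file
tempo, _beats = librosa.beat.beat_track(y=y, sr=sr)    # estimated BPM

client = SimpleUDPClient("127.0.0.1", 9000)            # hypothetical host/port
client.send_message("/tempo", float(tempo))
```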
Oki, what OS are you running? I can make you a build with the latest fixes regarding these features (the releases on GitHub are a bit older).
(sorry, getting Windows builds to work in the current alpha requires some more work… will get back to you hopefully this week-end when I have more free time)
Dear @jcelerier,
Did you manage to compile for Windows?
Hello,
sorry, I couldn’t finish all the fixes I wanted, so it’s likely super rough; here’s a link:
https://we.tl/t-MaOrWDDonJ
(also, my Windows laptop was in repair for a week, which didn’t help ^^’)
No problem! Thank you a lot. Just let me know if you have any important fix updates.
Hey,
I’ve been updating this week-end. The next focus will be on improving the piano roll per your issues, so I’ll keep you posted on my progress with that.
Sounds promising, thanks!