Spine.Event Audio
Hi Nate,
If we want to work with audio in a reasonable way, we need:
bool Spine.Event.isAudio;
float Spine.Event.Volume;
Action<float> Spine.Event.VolumeCallback;
where VolumeCallback is for animating volume over time, so that we can adjust volume levels at runtime according to what the animators set.
Panning implemented the same way as volume would also be very much appreciated.
Thank you, Marek.
I think audio details shouldn't be handled in Spine, nor stored in skeleton data.
Good mixing should be done in the engine or in audio tools incorporated into the engine, and stored accordingly.
Spine editor audio is for syncing with sound timings.
There are a million other things you might want to do in Spine. Some make sense architecturally; some fall out of scope.
That said, on top of the data that events can already store (int, float, string), I think some supplementary information that's actually labeled can make sense to store in the skeleton, so the skeleton can be self-documenting. It gets sort of annoying to open an old skeleton you were working on and have no idea what all the values and extra bones and event data means.
That would fall under this: Name/value annotations · #25 · EsotericSoftware/spine-editor
I would counter the idea that the Spine editor is for syncing only. Since Spine now provides nice video export with sound included, we use Spine for video production. Unfortunately we are not able to adjust the volume of an audio event, or control volume levels over time as a keyed value. That means extra work for every volume level of a sound that we need, which makes the whole Spine video production a bit tedious from a sound point of view.
For the runtime, our technique is to simply copy sounds from Spine to Unity and use a script that currently checks whether e.Data.AudioPath is set, extracts just the file name, and fires it with our internal sound manager. And here we are again: if we don't have volume and panning values, the whole workflow is pretty cumbersome and slow.
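The runtime-side workflow described above might be sketched roughly like this, assuming spine-unity's AnimationState event delegate; the SoundManager class stands in for the poster's own internal system and is hypothetical here:

```csharp
// Sketch of relaying Spine audio events to a custom sound system in Unity.
// Assumes spine-unity 3.7+, where EventData exposes AudioPath.
using System.IO;
using UnityEngine;
using Spine.Unity;

public class SpineAudioRelay : MonoBehaviour {
    void Start () {
        var skeletonAnimation = GetComponent<SkeletonAnimation>();
        skeletonAnimation.AnimationState.Event += HandleEvent;
    }

    void HandleEvent (Spine.TrackEntry trackEntry, Spine.Event e) {
        string audioPath = e.Data.AudioPath;
        if (string.IsNullOrEmpty(audioPath)) return; // Not an audio event.

        // Only the file name is reused on the Unity side; the editor's
        // folder structure is discarded.
        string clipName = Path.GetFileNameWithoutExtension(audioPath);
        SoundManager.Play(clipName); // Hypothetical custom sound manager.
    }
}
```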
Why is that a problem? Normally in production all sounds are normalized to 0 dB, but right now our sound engineers need to adjust each sound according to its use in the animation, which is simply the wrong workflow.
So as we see it, the next Spine editor/runtime evolution is to give us proper volume and panning control and to pass those values in the events and animation callbacks, as I proposed in the initial post.
Otherwise the whole audio implementation is a fairly useless feature, since in real production the workflow is the other way around: we first make all the animations, and then the sound engineers create sounds for the Spine-exported video, which they import into Cubase. So I see a bit of a blind spot in the assumption that it is used for syncing purposes; as I wrote, in reality it is always the opposite: first animation, then sound creation.
While common, animation is not always done before sound. There are certainly other workflows. Still, I do agree that keying volume/panning is useful.
Audio keys can overlap, eg: (screenshot of overlapping audio event keys removed)
This means individual volume control of each key isn't possible if a single volume timeline were provided for each audio event. The plan is that each audio event key has a curve, not for interpolation between keys, but to represent the volume over the audio clip's duration. A separate curve is also needed for panning. Likely these appear in the tree properties for each audio event key.
A callback for volume changes isn't great. Likely the implementation will be that an audio Event which triggered the start of playback can be queried for the volume or panning at any playback position, eg "what is the volume at 25% of the audio clip duration?". This polling mechanism keeps things simple while still providing all the functionality needed for runtimes.
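The polling idea can be illustrated with a small self-contained sketch. The curve representation and names here are hypothetical, not the actual Spine API; the point is that the consumer samples the curve at a normalized playback position instead of receiving volume callbacks:

```csharp
using System;

// Hypothetical per-key volume curve: (time, volume) pairs with time
// normalized to 0..1 over the audio clip's duration.
class VolumeCurve {
    readonly float[] times, values;

    public VolumeCurve (float[] times, float[] values) {
        this.times = times;
        this.values = values;
    }

    // Linear interpolation between keyed points; clamps outside the range.
    public float Sample (float t) {
        if (t <= times[0]) return values[0];
        for (int i = 1; i < times.Length; i++) {
            if (t <= times[i]) {
                float alpha = (t - times[i - 1]) / (times[i] - times[i - 1]);
                return values[i - 1] + (values[i] - values[i - 1]) * alpha;
            }
        }
        return values[values.Length - 1];
    }
}
```

With something like this, a runtime could call `curve.Sample(0.25f)` to answer "what is the volume at 25% of the audio clip duration?" without any callback machinery.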
Keep in mind that many game toolkits have relatively poor audio support. They may have high latency and/or a limited API that prevents functionality the Spine editor provides. Your game toolkit's capabilities should be considered before relying on the volume/panning features the editor provides.
To follow up, 3.7.29-beta has volume and balance (the correct term for stereo) on events, both in setup mode and keyed (to initiate playing an audio file). However, the volume and balance are constant for the duration of playback for each audio event key; it is not possible to change their values over time. To do that we need to allow a curve to be defined for each audio event key. 3.7 is dragging on too long already, so we've had to push that functionality to a future release.
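Assuming the new fields are exposed on Spine.Event as Volume and Balance (as they are in later spine-csharp releases), applying them in a Unity event handler might look like the sketch below; GetSourceFor is a hypothetical lookup from audio path to an AudioSource:

```csharp
// Sketch: apply the keyed per-event volume and balance when playing audio.
void HandleEvent (Spine.TrackEntry trackEntry, Spine.Event e) {
    if (string.IsNullOrEmpty(e.Data.AudioPath)) return; // Not an audio event.

    UnityEngine.AudioSource source = GetSourceFor(e.Data.AudioPath); // Hypothetical lookup.
    source.volume = e.Volume;     // Keyed volume for this audio event key.
    source.panStereo = e.Balance; // -1 (left) .. 1 (right).
    source.Play();
}
```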
Fair enough for 3.7 release. Thank you :-)