Rendering animations into textures
Hello!
I am trying to make a custom outline shader in Unity. I want to render the animation into a render texture and apply the shader to it, but I would like to know whether there is a way to do that without rendering the image from a camera, and instead render straight from the animation. Is that even possible?
Thanks in advance.
The Spine runtimes all generate a textured mesh; in spine-unity a MeshRenderer component is used. So the general rules of rendering a mesh to a RenderTexture apply: there is no image generated by spine-unity that could be accessed directly.
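For reference, the common way to get such a mesh into a RenderTexture is to point a dedicated camera at it via Camera.targetTexture. This is only a minimal sketch of that standard setup, not spine-unity-specific; the component and field names here are made up for illustration:

```csharp
using UnityEngine;

// Hypothetical example component: renders whatever a dedicated
// orthographic camera sees into a RenderTexture every frame.
public class RenderToTextureExample : MonoBehaviour
{
    public Camera captureCamera; // orthographic camera looking at the skeleton
    public RenderTexture target; // e.g. created as new RenderTexture(512, 512, 0)

    void Start()
    {
        captureCamera.targetTexture = target; // camera now renders into the texture
        captureCamera.clearFlags = CameraClearFlags.SolidColor;
        captureCamera.backgroundColor = Color.clear; // keep transparency for later shader passes
    }
}
```

This uses a camera, which is exactly what the question wants to avoid, but it is the baseline the camera-free approaches below are measured against.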
Thanks! Just how I thought.
Are there any known materials on how to render a mesh into a texture without using a camera? I couldn't find any information on that aside from the Unity documentation, which doesn't explain the process and is a bit outdated.
Unfortunately we know of no good resources either. If you should find some, sharing would be very well appreciated.
Here's what I've got so far:
Unity allows you to modify its render pipeline with Command Buffers.
https://blogs.unity3d.com/2015/02/06/extending-unity-5-rendering-pipeline-command-buffers/?_ga=2.221873880.1501917183.1612624884-1078945796.1596796187
I created a new CommandBuffer and added two commands to it: SetRenderTarget sets the RenderTexture as the target texture, and DrawRenderer draws the content of the MeshRenderer attached to the Spine animation object into the target texture.
public SkeletonAnimation anim; // spine-unity animation component
public MeshRenderer mr;        // MeshRenderer of the Spine object
public RenderTexture rt;       // target render texture
private CommandBuffer buffer;

void Start()
{
    var animation_state = anim.AnimationState;
    animation_state.SetAnimation(0, "Walk", true); // start the Spine animation
    buffer = new CommandBuffer();
    buffer.SetRenderTarget(rt); // set render texture [rt] as the render target
    buffer.DrawRenderer(mr, mr.material); // draw the MeshRenderer [mr] into it
}
OnWillRenderObject apparently gets called during the rendering process, so I execute the CommandBuffer there. Graphics.ExecuteCommandBuffer executes it immediately, while Camera and Light CommandBuffers get executed automatically during rendering, so an implementation using those would be a bit different.
private void OnWillRenderObject()
{
    rt.Release(); // clear the texture
    mr.transform.localScale = new Vector3(100f, -100f, 1f); // scale up to match texture pixel coordinates
    Graphics.ExecuteCommandBuffer(buffer);
    mr.transform.localScale = new Vector3(1f, 1f, 1f); // reset scale
}
private void OnDisable()
{
    rt.Release();
}
rt.Release() should clear the texture; I also do it in OnDisable because the garbage collector doesn't take care of it.
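As mentioned above, Camera CommandBuffers get executed automatically during rendering. A minimal sketch of that alternative wiring (the camera event used here is an assumption; the buffer is assumed to be filled as in Start() above):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Hypothetical example: attach the buffer to a camera event so it runs
// automatically each frame, instead of calling Graphics.ExecuteCommandBuffer
// manually from OnWillRenderObject.
public class AttachBufferExample : MonoBehaviour
{
    public Camera cam;            // camera whose render pass should run the buffer
    private CommandBuffer buffer;

    void OnEnable()
    {
        buffer = new CommandBuffer { name = "Spine render-to-texture" };
        // ... add SetRenderTarget / DrawRenderer commands as in Start() above ...
        cam.AddCommandBuffer(CameraEvent.AfterForwardOpaque, buffer);
    }

    void OnDisable()
    {
        cam.RemoveCommandBuffer(CameraEvent.AfterForwardOpaque, buffer);
    }
}
```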
This is probably not a 100% solution, as there might be some memory leakage, and it might be unoptimized in general. There is also an issue in that the renderer maps Unity units directly to pixels, which means an animation that is 3 units tall will be 3 pixels tall on the texture. The same goes for the position. That's why I change the scale before rendering, which is a temporary solution; I'm still looking for the correct way to do that.
Thanks very much for sharing, this looks very promising already!
Maybe you could set either the view or the projection matrix of the command buffer accordingly.
That way you could scale all three axes of the view matrix down by e.g. 0.01, or set up the projection matrix with the respective orthographic bounds: Matrix4x4.Ortho(-100f, 100f, -50f, 50f, 0.1f, 100f);
https://docs.unity3d.com/ScriptReference/Rendering.CommandBuffer.SetViewMatrix.html
https://docs.unity3d.com/ScriptReference/Rendering.CommandBuffer.SetProjectionMatrix.html
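To make that concrete, here is a minimal sketch combining those two calls with the existing buffer setup. The orthographic bounds are placeholders and need to cover your skeleton's actual bounds; with an identity view matrix, the near/far planes are chosen so a mesh at z = 0 is not clipped:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: set explicit view and projection matrices on the command buffer
// so world units map onto the RenderTexture without rescaling the transform.
// Assumes rt (RenderTexture) and mr (MeshRenderer) are assigned as before.
var buffer = new CommandBuffer();
buffer.SetRenderTarget(rt);
buffer.ClearRenderTarget(true, true, Color.clear);

// View matrix: identity, i.e. a camera at the origin with no rotation.
buffer.SetViewMatrix(Matrix4x4.identity);

// Projection matrix: orthographic bounds in world units (placeholder values);
// near = -1, far = 1 keeps geometry at z = 0 inside the clip volume.
buffer.SetProjectionMatrix(Matrix4x4.Ortho(-1f, 1f, -1f, 1f, -1f, 1f));

buffer.DrawRenderer(mr, mr.material);
Graphics.ExecuteCommandBuffer(buffer);
```

With this, the scale hack in OnWillRenderObject should no longer be needed, since the unit-to-pixel mapping is handled by the projection matrix instead.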