orgCrisium wrote:
JavaScript is single-threaded; this means your code is actually blocking JavaScript execution until you are done doing your calculations. If the computations were done on the GPU, then you would be doing the calculations asynchronously and not stalling the CPU.
Yes, we are aware of all that. However, GPU-side skinning puts limitations on the number of bones that can influence a mesh, due to the available number of uniforms through which we can push the bone matrices. Spine also allows switching between meshes for a single slot as part of an animation, which again changes the bone matrices that have to be uploaded. Taken all together, we end up with a lot more batches submitted to the GPU than if we calculate all vertex data on the CPU and upload it once (caveat: different blend modes/textures inside the same skeleton). We've benchmarked this and found it not to be an issue.
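To make the uniform limitation concrete, here is a hedged back-of-the-envelope sketch; the numbers and the reserved-slot count are illustrative assumptions, not Spine's actual values:

```javascript
// A 4x4 bone matrix occupies 4 vec4 uniform slots. Given the GPU's
// MAX_VERTEX_UNIFORM_VECTORS limit, minus slots reserved for the
// projection/model-view matrices and other uniforms, the number of
// bones that fit in vertex uniforms is roughly:
function maxBonesInUniforms(maxVertexUniformVectors, reservedVectors) {
  return Math.floor((maxVertexUniformVectors - reservedVectors) / 4);
}

// A common WebGL minimum is 256 vec4 vertex uniforms; reserving 64
// of them for other uniforms leaves room for only 48 bone matrices.
console.log(maxBonesInUniforms(256, 64)); // 48
```

A Spine skeleton can easily exceed such a cap once several mesh attachments with many influencing bones are involved, which is part of why CPU-side skinning avoids this class of limits.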
This is wrong. The blend stage has nothing to do with the alphaTest value. The alphaTest value is there to help the fragment shader not fill the z-buffer where alpha values are rejected by the alphaTest comparison. I don't think ThreeJS has a dedicated function for it, so the logic is usually hardcoded in the fragment shader with something like:
if ( diffuseColor.a < ALPHATEST ) discard;
This prevents z-buffer writes for the transparent parts of the image.
This is correct according to the threeJS docs, and according to the behaviour of rendering when unsetting this value:
https://threejs.org/docs/#api/en/materials/Material.alphaTest
Using double sided is also on purpose, as flipping the skeleton would otherwise result in incorrect rendering.
This is not a valid reason. You are forcing double sided for everybody. This is a decision the user must make depending on what they want to achieve. And as I stated your geometry is drawn in the wrong direction!
No, it also enables lighting on both sides of the skeleton, a use case users have. We opted for setting this as the default, and it can be customized by modifying the `doubleSided` and `side` attributes of the material parameters. The winding of the vertices is the same in all our runtimes.
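For illustration, a parameters customizer could override the double-sided default along these lines. This is a hedged sketch: the callback shape is an assumption based on this discussion, and the side constants are inlined here so the snippet is self-contained (they mirror three.js's actual `THREE.FrontSide`/`THREE.DoubleSide` values):

```javascript
const FrontSide = 0;   // THREE.FrontSide
const DoubleSide = 2;  // THREE.DoubleSide

// Hypothetical customizer: receives the material parameters before the
// material is instantiated and opts out of the double-sided default.
function materialCustomizer(parameters) {
  parameters.side = FrontSide;
}

const parameters = { side: DoubleSide /* the double-sided default */ };
materialCustomizer(parameters);
console.log(parameters.side); // 0 (FrontSide)
```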
This means all GPU attributes are standard attributes needed to render an image (nothing amazing about that). This is also the reason I can use my own shader by circumventing yours.
Yes, we use standard attribute names, which will resolve correctly for many of the materials that come with threeJS. But that is not necessarily true for custom materials, e.g. we lack normal attributes.
... textures and materials ...
So the problem with all of this is as follows. Here's how the whole thing is setup:
- A Spine skeleton contains 0 or more attachments (==meshes).
- Each attachment references one texture page from a texture atlas.
- Different attachments in the same skeleton can reference different texture pages. Their materials will not be compatible.
- Different attachments in the same skeleton can have different blend modes. Their materials will not be compatible.
- Each of the attachments is ultimately converted to a `MeshBatcher`, which is really a `THREE.Mesh` attached to the `SkeletonMesh` (which is a `THREE.Object3D`), in `SkeletonMesh.updateGeometry()`.
- Subsequent attachments of a skeleton all go into the same `MeshBatcher` if the texture and blend mode they need are the same, and there's enough space in that `MeshBatcher` in terms of vertices/indices.
- Every frame, you call `SkeletonMesh.update()`, which will advance the animation and reconstruct the `MeshBatcher`s based on the latest animation data.
- Then, you tell threeJS to render all meshes of all `THREE.Object3D`s in the scene. A single `SkeletonMesh` will contribute 1 or more `MeshBatcher`s for rendering, each representing one or more attachments.
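The batching rules above can be sketched as follows. This is an illustrative model, not the actual MeshBatcher implementation; the object shapes and the capacity check are assumptions made for the sketch:

```javascript
// An attachment joins the current batch only if its texture page and
// blend mode match and the batch still has vertex capacity; otherwise a
// new batch is started.
function batchAttachments(attachments, maxVerticesPerBatch) {
  const batches = [];
  let current = null;
  for (const a of attachments) {
    const fits =
      current &&
      current.texture === a.texture &&
      current.blendMode === a.blendMode &&
      current.vertexCount + a.vertexCount <= maxVerticesPerBatch;
    if (!fits) {
      current = { texture: a.texture, blendMode: a.blendMode, vertexCount: 0, attachments: [] };
      batches.push(current);
    }
    current.attachments.push(a);
    current.vertexCount += a.vertexCount;
  }
  return batches;
}

// Limbs and torso share texture page 0; a halo sits on page 1 with a
// different blend mode, so it lands in a second batch.
const batches = batchAttachments(
  [
    { texture: "page0", blendMode: "normal", vertexCount: 64 },
    { texture: "page0", blendMode: "normal", vertexCount: 64 },
    { texture: "page1", blendMode: "additive", vertexCount: 16 },
  ],
  1024
);
console.log(batches.length); // 2
```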
Now, if we use the same `Material` for all `MeshBatcher` instances, we'll run into issues as soon as the texture or blend mode differs between two attachments of a skeleton. Imagine a skeleton for a character. Its limbs, torso, and head attachments reference one texture page. However, it also has a halo-effect attachment on top of its head, which ended up on a different texture page in the texture atlas. The limbs, torso, and head attachments will thus go into one `MeshBatcher`, and the halo into another.
Now, if those two `MeshBatcher` instances shared the same `Material`, then the texture assigned in the uniform from the first batcher would get overwritten by the texture assignment for the second batcher. When threeJS then starts rendering the batches, it will render the first batch with the wrong texture.
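A toy model of that aliasing problem, with plain objects standing in for three.js materials and batchers:

```javascript
// Two batchers share one material object. Whoever assigns the texture
// last wins, so the first batch ends up drawn with the second batch's
// texture. Not three.js code, just the aliasing issue in miniature.
const sharedMaterial = { map: null };
const batcherA = { material: sharedMaterial };
const batcherB = { material: sharedMaterial };

batcherA.material.map = "limbsTexture";
batcherB.material.map = "haloTexture"; // overwrites A's texture too

console.log(batcherA.material.map); // "haloTexture" -- wrong for batch A
```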
And this is why we can't share materials, neither within batches of the same skeleton, nor across skeletons. If you know that in your specific application all skeletons and their attachments share the exact same texture, then yes, you can use a single material for everything. But we cannot anticipate this in our code, nor check for it in a smart way. Instead, we need to instantiate our own `Material` inside `SkeletonMesh` as we run through the attachments of the skeleton, checking whether the previous and current attachments are compatible. If they are, great: all the attachments in your `SkeletonMesh` may end up in a single batch, with a single material. If they differ, you get two or more batches, with as many material instances.
This also explains why we use the customizer instead of letting you specify your own material. We have to instantiate materials internally on the fly. If you provide us with a material, we can't simply clone it (at least there is no such deep-copying method on `THREE.Material`).
Now, having said all that, what I can offer is the following. The constructor of `SkeletonMesh` could take either a `SkeletonMeshMaterialParametersCustomizer` or a `THREE.Material`. If you provide nothing or a customizer, then the logic described above happens, with us instantiating `SkeletonMeshMaterial`s as needed. If you provide your own `THREE.Material`, we'll simply set it on all batches and trust that you know what you are doing.
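The proposed either/or constructor could be dispatched on the argument's type, roughly like this. A hedged sketch only: the class and property names are hypothetical, not the actual spine-threejs API:

```javascript
// If the second argument is a function, treat it as a parameters
// customizer and instantiate materials per batch; if it's a
// material-like object, use it for every batch as-is.
class SkeletonMeshSketch {
  constructor(skeletonData, materialOrCustomizer) {
    this.skeletonData = skeletonData;
    if (typeof materialOrCustomizer === "function") {
      this.customizer = materialOrCustomizer;
      this.sharedMaterial = null; // materials created on the fly, as today
    } else {
      this.customizer = null;
      this.sharedMaterial = materialOrCustomizer || null; // trusted as-is
    }
  }
}

const withCustomizer = new SkeletonMeshSketch({}, (params) => params);
const withMaterial = new SkeletonMeshSketch({}, { map: "myTexture" });
console.log(withCustomizer.sharedMaterial); // null
console.log(withMaterial.sharedMaterial.map); // "myTexture"
```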
How does that sound?