clandestine

  • 27 Feb 2016
  • Joined: 22 Dec 2014

    Ah, good catch Nate. Of course I should be mixing the entire pose from the first animation!

    It's totally possible. In Spine C#, I subclassed TrackEntry to allow a simple weighted blend between two animations in the mixing phase:

    public class BlendTreeTrackEntry : TrackEntry {

        internal Animation endAnimation; // second animation to blend toward
        internal float weight;           // 0 = start animation only, 1 = endAnimation only

        public float Weight { get { return weight; } set { weight = value; } }

        public override void MixAnimation(Skeleton skeleton, float lastTime, float time, bool loop, ExposedList<Spine.Event> events, float alpha)
        {
            // Apply both animations, splitting this track's alpha by weight.
            Animation.Mix(skeleton, lastTime, time, loop, events, alpha * (1 - weight));
            endAnimation.Mix(skeleton, lastTime, time, loop, events, alpha * weight);
        }
    }

    There are a few other bits in AnimationState that make this work, but the general idea is that you mix in both animations at the same time, weighted against each other. It works pretty well, although I think a true multi-animation, multi-dimensional blend tree is a good feature for the runtimes to support on their own.
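    For illustration, driving the blend from gameplay might look something like this. This is a hypothetical sketch: it assumes AnimationState has been modified to create BlendTreeTrackEntry instances, that endAnimation is assigned elsewhere, and `speed` / `maxRunSpeed` are made-up gameplay values.

    ```csharp
    // Assumes a modified AnimationState that returns BlendTreeTrackEntry
    // from SetAnimation, with endAnimation (e.g. "run") assigned elsewhere.
    var entry = (BlendTreeTrackEntry)state.SetAnimation(0, "walk", true);

    // Each frame, drive the walk->run blend from gameplay speed.
    entry.Weight = Mathf.Clamp01(speed / maxRunSpeed); // 0 = walk, 1 = run
    ```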

    I should add that you could also do something like this by controlling the weight of animations in different tracks, but I like to use different track indices for functionally different things (animation layer style).
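    A rough sketch of the track-based alternative, assuming a per-entry alpha property like the one later Spine runtimes expose (older versions may differ):

    ```csharp
    // Base locomotion on track 0, upper-body "aim" layered on track 1.
    // Assumes SetAnimation(track, name, loop) and a per-entry Alpha property.
    var walk = state.SetAnimation(0, "walk", true);
    var aim = state.SetAnimation(1, "aim", true);
    aim.Alpha = 0.5f; // 0 = track 0 pose only, 1 = aim fully overrides its keyed bones
    ```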

    Maybe a Spine Slack would be a good way of coordinating stuff? It's what I use for Unity and Atom discussions.

    You've got the right idea. Jumping is usually broken into multiple pieces so you can tune it for gameplay more easily, and no matter what you do you'll want a looping falling animation, because the height you fall from will probably be variable.

    Now, it also sounds like you're actually moving the character in the animation. Unless you're integrating root motion (which I wouldn't recommend for jumps) you don't want to do that. Ordinarily you'd keep the character at their root position in the animation and do all the movement through physics / your movement controller.
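    A minimal sketch of what I mean in Unity terms (the class and field names are made up; the point is that the controller moves the transform while the skeleton stays at its root):

    ```csharp
    using UnityEngine;

    // The movement controller moves the character; the jump animations
    // (jump-up / fall-loop / land) keep the skeleton at its root position.
    public class JumpController : MonoBehaviour {
        public float jumpVelocity = 8f;
        Rigidbody2D body;

        void Awake () { body = GetComponent<Rigidbody2D>(); }

        void Update () {
            if (Input.GetButtonDown("Jump"))
                body.velocity = new Vector2(body.velocity.x, jumpVelocity);
        }
    }
    ```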

    That is super cool! Took me a little bit to understand how much damage I was doing and how to dodge effectively (keyboard controls seem a bit finicky, not sure about dodge being on Shift), but overall real neat!

    What happens when you replace the SE animation with a test animation? Does it still behave the same way?

    Happy to help!

    On the SkeletonAsset that lists all the animations, you can set the default mix (blend time). Try changing that to 0.
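    If you'd rather do it in code, the same value lives on the AnimationStateData (names per the spine-csharp source; check your runtime version):

    ```csharp
    // stateData is the AnimationStateData backing your AnimationState.
    stateData.DefaultMix = 0f;              // no blending between animations
    stateData.SetMix("jump", "fall", 0.1f); // or override per animation pair
    ```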

    So I do something like this for my game, but I'm not using the SkeletonLit shader. Instead I'm using a variant of the Unity Standard shader that has vertex colors added that I picked up on the Unity forums (http://forum.unity3d.com/threads/standard-shader-with-vertex-colors.316529/) and then tinkered with. Take a look at the attached folder and see if that helps (you'll want to change it from tint to overlay for the vertex color, and probably remove some spurious lines).

    TrackEntry should have a pointer to both the previous and next TrackEntries. There's no way to find out the current mix status, though, because that's calculated on the fly. If you wanted to know you'd have to recreate it, or possibly modify the runtimes to cache the value.
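    Recreating it would look roughly like this. The field names (previous, mixTime, mixDuration) follow the spine-csharp source of this era; treat them as assumptions and check your version.

    ```csharp
    // Recompute the crossfade alpha the way the runtime does internally.
    static float GetMixAlpha (TrackEntry entry) {
        if (entry.previous == null || entry.mixDuration == 0) return 1f;
        float alpha = entry.mixTime / entry.mixDuration;
        return alpha > 1 ? 1 : alpha; // fully mixed once mixTime passes mixDuration
    }
    ```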

    I'm assuming the method signatures for UpdateWorld and UpdateLocal have changed. You need to make sure your ApplyRootMotion and UpdateBones methods match the delegate signature defined in SkeletonAnimation (actually SkeletonRenderer). If I had to guess, it's because they now expect a SkeletonRenderer rather than a SkeletonAnimation.
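    In other words, the handler would need to accept the new parameter type, something like this (exact delegate and event names depend on your runtime version):

    ```csharp
    // Handler now takes a SkeletonRenderer, matching the updated delegate.
    void ApplyRootMotion (SkeletonRenderer renderer) {
        var anim = renderer as SkeletonAnimation; // cast back if you need the state
        if (anim == null) return;
        // ... apply root bone motion to the transform here ...
    }

    // Subscription stays the same:
    // skeletonAnimation.UpdateLocal += ApplyRootMotion;
    ```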

    I believe higher-index tracks overwrite keyframes, and there may be an alpha / mix option as well if you want subtle layering. Like with all animations, this will only modify those bones that actually have frames set in the particular animation.

    I'm going to guess you need to enable "Combine Subdirectories", because your source images might be coming from different directories?

    And try "Strip Whitespace" unless there's a specific reason you can't.

    Just wanted to throw my hat into the ring - a shared, supported root motion solution that deals with these issues would be super valuable. Maybe that's something we can put together as a patch for the runtimes?

    Great! Hopefully I can pull the relevant portions together.

    @Mitch: Resurrecting this thread, but is there any way we could get a copy of that forked SkeletonRenderer? I'm trying to integrate the standard Unity shader with Spine, and everything works except for normal maps at this point.

    Just wanted to say this reminded me to finally fix my stupid, stupid modification of BoneData. Because I have multiple copies of the same skeleton, I ended up just creating duplicate copies in memory so that I could modify the initial position of some bones on a per-character basis. The Goblins example (using UpdateLocal) is obviously the correct way to do it, and now I'm not allocating 30 MB for every additional character.

    Just saw the Git checkins and had to come look at this. Super rad work, Mitch and Pharan!

    Nate wrote:

    Yes, you'd need to duplicate the data for each unique setup pose. The easiest way to get it working is to load the JSON multiple times, but you can also duplicate SkeletonData and reuse the same skins, attachments, and animations.

    I think I ended up hacking it to duplicate SkeletonData, but I'm not sure it's reusing the other data. Pretty sure I'm over-allocating the hell out of things 🙂 I'll have to dig into it and figure out the best approach within the confines of how the Unity runtimes work.
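    The duplication Nate describes might look roughly like this, as a heavily hedged sketch: member names are assumptions from the spine-csharp source, and slots, events, IK, and the rest of the setup data are elided.

    ```csharp
    // Clone the bone list so each variant gets its own setup pose, while the
    // heavyweight data (skins, attachments, animations) is shared by reference.
    SkeletonData CloneForVariant (SkeletonData source) {
        var copy = new SkeletonData();
        foreach (BoneData bone in source.Bones) {
            // Parent lookup assumes parents appear before children in the list.
            BoneData parent = bone.Parent == null
                ? null : copy.FindBone(bone.Parent.Name);
            var b = new BoneData(bone.Name, parent) {
                Length = bone.Length, X = bone.X, Y = bone.Y,
                Rotation = bone.Rotation, ScaleX = bone.ScaleX, ScaleY = bone.ScaleY
            };
            copy.Bones.Add(b);
        }
        foreach (Skin skin in source.Skins) copy.Skins.Add(skin);                // shared
        foreach (Animation anim in source.Animations) copy.Animations.Add(anim); // shared
        return copy;
    }
    ```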