iOS SDK - SpineView using up a lot of memory?

Hi, I'm currently using the official Spine iOS SDK in my SwiftUI app and would like to ask about the following:

My spine assets (.skel, .atlas, and .pngs) are ~40MB, which I use to instantiate the SkeletonDrawableWrapper, but when I present them using SpineView, my app's memory usage climbs to around 1GB. The exact numbers aren't that important; mostly I'm wondering what could be causing this spike in memory usage.

Not sure if this helps any, but when I try profiling with Instruments it shows the system libraries (specifically Metal? IOGPUResourceCreate and mmap) producing that memory spike.

As a follow-up, is there a way to selectively load textures/assets into memory when instantiating the SpineModel/SpineView? From my understanding, everything is loaded at once. Sorry, I'm not an expert in Spine.

Please let me know if you need more details, thank you!

The memory spike you're experiencing when using the Spine iOS SDK could be due to several factors. One possibility is that all textures and assets are being loaded into memory at once, which can be quite demanding, especially if your assets are large. This is consistent with the behavior of the Metal library, which is responsible for rendering and can consume significant memory resources when handling large textures.

To address this, consider loading only the necessary textures and assets into memory, dynamically loading and unloading them based on what is currently needed for rendering. This approach comes up when managing a large number of attachments: pack an atlas at runtime containing only the images needed for the current scene or level, so that video memory is spent only on the attachments you are actually going to render.

Additionally, you might want to explore optimizing your texture atlases to ensure they are as efficient as possible, potentially reducing their size or the number of textures being used at any one time.

Following up, I made a minimal reproducible app:

import SwiftUI
import Spine

struct ContentView: View {
    @State var model = SpineModel()
    
    var body: some View {
        Group {
            if model.isReady, let drawable = model.drawable {
                SpineView(
                    from: .drawable(drawable),
                    controller: model.controller
                )
            } else {
                Text("Loading SpineModel...")
            }
        }
        .task {
            await model.retrieveSpineAssetsLocally()
        }
    }
}

@Observable final class SpineModel {
    var controller: SpineController
    var drawable: SkeletonDrawableWrapper?
    
    var isReady: Bool = false
    
    init() {
        controller = SpineController(
            onInitialized: { controller in
                controller.animationStateData.defaultMix = 0.2
                controller.animationState.setAnimationByName(
                    trackIndex: 0,
                    animationName: "exampleAnimation",
                    loop: true
                )
            },
            disposeDrawableOnDeInit: false // the app manages the drawable's lifetime itself
        )
    }
    
    @MainActor
    func retrieveSpineAssetsLocally() async {
        isReady = false
        do {
            let drawable = try await SkeletonDrawableWrapper.fromBundle(
                atlasFileName: "myAtlas.atlas",
                skeletonFileName: "mySkel.skel"
            )
            self.drawable = drawable
        } catch {
            // Should never reach here when debugging locally.
            fatalError("No bundled Spine assets found.")
        }
        isReady = true
    }
}

which reports an idle memory usage of 1.2GB in Xcode on my real device.

Strangely enough, when using the Simulator (iPhone 16 on iOS 18.2, Rosetta version), the app hovers around only 200MB.

And here is the result of profiling with the Allocations tool in Xcode Instruments:

[Instruments Allocations screenshot: real device]

Sorry for not keeping it all in one post; I keep wanting to add more info.

The Instruments screenshot above is for the real device. Here is the one for the Simulator:

[Instruments Allocations screenshot: Simulator]

I notice that neither VM: IOAccelerator nor VM: IOSurface is present in the Statistics table in the Simulator's Instruments screenshot. Not sure if that helps any.

My spine assets (.skel, .atlas, and .pngs) are ~40MB

What are the resolutions of your PNGs? A 2048x2048 PNG can be relatively small when compressed on disk, but uncompressed takes 2048 * 2048 * 4 = 16.8MB. It's width * height * 4 bytes for RGBA (32 bits per pixel). 40MB of PNG (which is always compressed) can be quite a lot uncompressed.
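
To make that arithmetic concrete, here is a small Swift sketch using the resolutions mentioned in this thread:

// Uncompressed RGBA8 texture memory: width * height * 4 bytes per pixel.
func uncompressedBytes(width: Int, height: Int) -> Int {
    width * height * 4
}

// Example resolutions from this thread.
for (w, h) in [(2048, 2048), (600, 1000), (600, 500)] {
    let mb = Double(uncompressedBytes(width: w, height: h)) / 1_000_000
    print("\(w)x\(h): \(String(format: "%.1f", mb)) MB")
}
// Prints 16.8 MB, 2.4 MB, and 1.2 MB respectively.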

The PNGs vary in resolution, but they're below 2048x2048 (maybe 600x1000, 600x500, etc.). While there are a lot of PNGs, I only use a few at a time. Is there support for selective texture loading of any kind, in the SDK or Metal, or is that something to be done manually? Any advice/ideas would be appreciated.

By default, loading an atlas loads all the PNGs. Our APIs are flexible and texture loading can be selective or delayed, but I'm not sure how best to do that for the spine-ios runtime. My colleague Mario is out until next week, but he will be able to help then. Sorry for the delay! In the meantime it might help to try with an atlas containing only the PNGs you need, then see how memory usage changes.

Thanks for the insight and suggestions – unfortunately those won't work for our use case, so we'll have to wait for Mario's guidance to move forward!

What you see does indeed indicate that a lot of texture memory is being allocated.

Loading and unloading textures on the fly is currently not something the spine-ios runtime can implement for you. Depending on your animations, you can swap in attachments at any point in time. Unloading unneeded attachments and loading the newly needed attachments would require us to load and decode the PNGs from disk and upload them as textures to the GPU. This would result in stalls, as that process can take several hundred milliseconds, depending on the size of your textures. Automatically keeping only a "hot" set of textures around is thus not something the spine-ios runtime will ever be able to support out of the box, as that's highly dependent on your specific use case. I'm afraid some manual work will be involved.
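
If you do roll your own loading, the CPU-side decode can at least be moved off the render path. A minimal sketch using UIKit's byPreparingForDisplay() (iOS 15+); the GPU upload still happens when the texture is first drawn:

import UIKit

// Decode a PNG off the main thread so the render loop doesn't stall on it.
// byPreparingForDisplay() forces decompression ahead of time (iOS 15+).
func preloadImage(named name: String) async -> UIImage? {
    guard let image = UIImage(named: name) else { return nil }
    return await image.byPreparingForDisplay()
}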

Could you give us more insight into how many atlases you have, how many pages those atlases have, how many images from those atlases are used at any point in time, how many skeletons and SpineView instances you are displaying at any one time, and so on? Maybe a little demo project that showcases the issue, which we can run locally?

Moving a reply from email to here. The question was: When do we plan to support dynamic texture loading for iOS?

Our APIs make it possible to manage textures however you need. For our most popular runtimes, like spine-unity, we provide more game-toolkit-specific functionality, since it helps such a large number of people. While we'd love to bring all of what spine-unity can do to all runtimes, we don't have short-term plans to do it for spine-ios. Even with Unity you still have a fair amount of app-specific work to do.

You could consider switching to Unity to use the utilities provided there. Otherwise you could implement it yourself, possibly using spine-unity as a guide. Basically, skeleton data is decoupled from textures: you can load the data without loading any textures.
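
For illustration, here is a design sketch of that decoupling in plain Swift. The protocol and class names are hypothetical app-side code, not part of the spine-ios API:

import UIKit

// Hypothetical app-side indirection: texture pages are decoded only when
// first requested and can be released when no longer rendered.
protocol TextureProviding {
    func texture(forPage page: String) -> UIImage?
    func release(page: String)
}

final class LazyTextureProvider: TextureProviding {
    private var cache: [String: UIImage] = [:]

    func texture(forPage page: String) -> UIImage? {
        if let cached = cache[page] { return cached }
        let image = UIImage(named: page)  // decode the PNG on first use
        cache[page] = image
        return image
    }

    func release(page: String) {
        cache.removeValue(forKey: page)  // let the decoded bitmap be freed
    }
}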

First you decide which attachments your skeleton will use. To support thousands of attachments, you probably want to use templates in the Spine editor, then at runtime duplicate the templates and customize them to use images based on naming conventions. E.g., hat-red.png would duplicate the hat template and assign the hat-red.png texture region from an atlas (though we haven't loaded images yet). This means you don't need to rig thousands of hats in the editor: you rig one, then replace it with images, so adding new attachments is just adding new images.
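
A self-contained Swift sketch of that duplication step, with minimal stand-in types; in a real app these would be the corresponding spine-ios runtime types, whose exact Swift signatures may differ:

// Stand-in types approximating spine runtime concepts.
final class TextureRegion {
    let name: String
    init(name: String) { self.name = name }
}

final class RegionAttachment {
    var region: TextureRegion?
    func copy() -> RegionAttachment {  // preserves the template's rigging
        let copy = RegionAttachment()
        copy.region = region
        return copy
    }
}

final class Atlas {
    private let regions: [String: TextureRegion]
    init(regions: [String: TextureRegion]) { self.regions = regions }
    func findRegion(_ name: String) -> TextureRegion? { regions[name] }
}

// Duplicate the rigged "hat" template and point the copy at a region chosen
// by naming convention, e.g. "hat-red" reuses the "hat" template's rigging.
func makeVariant(of template: RegionAttachment,
                 imageName: String,
                 atlas: Atlas) -> RegionAttachment? {
    guard let region = atlas.findRegion(imageName) else { return nil }
    let variant = template.copy()
    variant.region = region
    return variant
}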

You'll need to do the bookkeeping to know what images are available and which template they correspond to. Consider that some attachments may use multiple images; e.g., a shirt could have 2 sleeves and a torso. This bookkeeping is pretty app-specific, so it isn't something we can provide for you.

Once your skeleton has the attachments you want, you need to determine all the images required to render it. Likely you already know the image each attachment needs, but you may need to consider which animations you will play, since animations can show new attachments. You could even download the images at this point, if you don't ship them all with the app.
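
A minimal sketch of that bookkeeping in plain Swift; the manifest shape and all names here are assumptions, since this part is app-specific:

// App-specific manifest: which images each outfit item needs. Some items
// use several images, e.g. a shirt with two sleeves and a torso.
struct OutfitItem {
    let template: String  // the editor template this item duplicates
    let images: [String]
}

// Images shown only by specific animations (animations can key attachments).
let animationImages: [String: [String]] = [
    "wave": ["hand-open"],
    "idle": [],
]

// The full set of images needed to render the outfitted skeleton.
func requiredImages(items: [OutfitItem], animations: [String]) -> Set<String> {
    var needed = Set(items.flatMap { $0.images })
    for animation in animations {
        needed.formUnion(animationImages[animation] ?? [])
    }
    return needed
}

let shirt = OutfitItem(template: "shirt",
                       images: ["shirt-sleeveL", "shirt-sleeveR", "shirt-torso"])
let hat = OutfitItem(template: "hat", images: ["hat-red"])
print(requiredImages(items: [shirt, hat], animations: ["wave"]))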

Next you pack all those attachments into a texture atlas. spine-unity can pack for you, or you can look at its code. There are many ways to pack; e.g., libgdx provides a FOSS texture packer (NB, I wrote it, and it's similar to what Spine uses). Once packed, you can use the atlas to render your skeleton efficiently. For each attachment, you assign a texture region in the atlas. This must be done before rendering, of course.
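
For illustration, a very small single-page shelf packer in Swift using UIKit. This is a sketch, not spine-unity's packer; a real packer must also handle padding, rotation, and overflow onto additional pages:

import UIKit

// Packs images into one page, tallest first, in left-to-right rows, and
// records each region's rect so attachments can reference the packed page.
// Assumes everything fits on a single page.
func packAtlasPage(images: [String: UIImage],
                   pageSize: CGSize) -> (page: UIImage, regions: [String: CGRect]) {
    var regions: [String: CGRect] = [:]
    var cursor = CGPoint.zero
    var rowHeight: CGFloat = 0

    let page = UIGraphicsImageRenderer(size: pageSize).image { _ in
        for (name, image) in images.sorted(by: { $0.value.size.height > $1.value.size.height }) {
            if cursor.x + image.size.width > pageSize.width {
                cursor = CGPoint(x: 0, y: cursor.y + rowHeight)  // new row
                rowHeight = 0
            }
            let rect = CGRect(origin: cursor, size: image.size)
            image.draw(in: rect)
            regions[name] = rect
            cursor.x += image.size.width
            rowHeight = max(rowHeight, image.size.height)
        }
    }
    return (page, regions)
}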

In this way you can have an unlimited number of attachments and still render the outfitted skeleton efficiently. I don't think spine-ios lacking a few utility functions is a big blocker for doing this.

If you don't need to scale your number of attachments into outer space, then you can use a simpler approach. Maybe you don't use templates and just rig in the editor. Maybe you don't pack at runtime. Even if you have more images than fit on a single atlas page, atlas pages can be organized so a skeleton requires just a few draw calls; even a dozen or more can be fine.