Spine load time improvements
Hi,
In our game we have big skeleton and atlas files, and we noticed a big performance issue running Spine on old devices. After profiling the code, I found that the major problem was the loading time of the skeleton and atlas files for some complex objects. In our case, our main Spine object was taking 5.8 seconds just to load on an iPad mini, without any textures.
From the profiling, I came to the following conclusions:
- The JSON parser used by Spine is very slow
- The JSON format itself is slow to parse for big skeletons
- The atlas file format used by Spine is also slow to parse
To solve these issues, I created two projects, which can be found in the following repos:
https://github.com/atdrez/opack
https://github.com/atdrez/sharpjson
OPack is a binary format that is fully compatible with JSON. It's optimized to avoid key/value duplication, which drastically reduces file size and load time. It can reduce the original file size by more than 3x (in our case, our biggest stripped JSON was 1.4 MB and shrank to 397 KB). It's also much faster than other binary JSON formats like BSON and UBJSON.
It parses roughly 5x faster than the original Spine JSON parser, which drastically reduced the loading time of our application.
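The core idea, sketched very roughly below (my own illustration with hypothetical tags, not the actual OPack encoding; the real format is defined in the opack repo above), is that a key such as "rotation" is written in full only once, added to an index table, and every later occurrence is written as a small back reference instead of the full string:

using System.Collections.Generic;
using System.IO;
using System.Text;

class KeyTableWriter {
	const byte TagNewKey = 0x10, TagKeyRef = 0x11; // Hypothetical tags.
	readonly Dictionary<string, int> keys = new Dictionary<string, int>();

	public void WriteKey (BinaryWriter output, string key) {
		int index;
		if (keys.TryGetValue(key, out index)) {
			// Seen before: emit a compact back reference instead of the string.
			output.Write(TagKeyRef);
			output.Write((ushort)index);
		} else {
			// First occurrence: register it and write the full UTF-8 string.
			keys[key] = keys.Count;
			output.Write(TagNewKey);
			byte[] bytes = Encoding.UTF8.GetBytes(key);
			output.Write((ushort)bytes.Length);
			output.Write(bytes);
		}
	}
}

Skeleton JSON repeats the same small set of keys ("x", "y", "rotation", bone names, and so on) many times, so this kind of deduplication is where most of the size reduction comes from.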
SharpJson is a standard JSON parser, but about 2x faster than the JSON parser currently used by Spine.
Here are some benchmarks (tested on a MacBook Pro, 2.9 GHz Intel Core i5, 8 GB RAM) using one of our biggest skeleton data files. They show the decoding time for each parser (5 iterations):
OPack: 474 ms
SharpJson: 1257 ms
JsonSpine: 2525 ms
Kernys.BSON: 2480 ms
MiniJSON: 3043 ms
PatrickVanBerguer's JSON: 3591 ms
Integrating both with Spine is fairly easy; it's just a few lines of code.
It would be great if these optimizations made it upstream. Let me know if I can be of any help.
Best regards,
Adriano Rezende
SharpJson is interesting if it can provide that 2x deserialization performance for spine-csharp.
Let's see what Nate says. There may be some other things to it.
For binary, have you tried using Spine's binary export instead?
Hi Pharan,
I've actually changed spine-csharp locally to use SharpJson, and it boosted the loading time.
It doesn't take much effort; you just need to change the contents of "spine-csharp/Json.cs" to the following code:
using System.IO;

namespace Spine {
	public class Json {
		public static object Deserialize (TextReader text) {
			// Delegate parsing to SharpJson, parsing numbers as floats as the runtime expects.
			var parser = new SharpJson.JsonDecoder();
			parser.parseNumbersAsFloat = true;
			return parser.Decode(text.ReadToEnd());
		}
	}
}
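Nothing else in the runtime needs to change, since the rest of spine-csharp keeps calling the same Json.Deserialize entry point. For illustration (the helper and file path are just examples):

using System.Collections.Generic;
using System.IO;
using Spine;

static class JsonCheck {
	// The drop-in parser still produces the root dictionary that SkeletonJson walks.
	static object Load (string path) {
		using (var reader = new StreamReader(path))
			return Json.Deserialize(reader) as Dictionary<string, object>;
	}
}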
Regarding SkeletonBinary, I was actually working on a C# port, but it seems to imply a lot of code duplication, since it rewrites most of the reading code just to handle a different input. Another problem is the extra maintenance burden of keeping text and binary support in sync, whereas handling dictionary-like structures (as with JSON) can lead to better code reuse.
Also, looking at the format of the skeleton binary file, I don't think it would make a big difference in performance (at least in loading time). Since the OPack format uses back references, it avoids duplication in both keys and values, which is much more efficient for medium and large files; for small files it probably won't make much difference.
Here is the OPack spec:
https://github.com/atdrez/opack/blob/dev/SPEC_BNF.md
Since I had to optimize all of our skeletons, one way to support OPack binaries in the SkeletonDataAsset was to add the following code:
if (isBinary) {
	// Decode the OPack bytes into the same dictionary structure SkeletonJson expects.
	var decoder = new OPack.OPack();
	var data = decoder.Decode(skeletonJSON.bytes);
	skeletonData = json.ReadSkeletonData(data as Dictionary<string, object>);
} else {
	skeletonData = json.ReadSkeletonData(new StringReader(skeletonJSON.text));
}
BR,
Adriano
I'm afraid JSON (in any form: UBJSON, etc.) is the wrong format to use when speed and/or memory usage is a concern. A dictionary-based or even a text-based format cannot compete with Spine's binary format, which is the smallest, most efficient representation of the data. As soon as the runtimes can support v3, we'll finally add binary support to them all.
Note that typically the hotspot is reading floats, of which there can be many.
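To give a sense of what that looks like, here is roughly what reading one big-endian float from a stream involves (just a sketch, not the runtime's actual code):

using System;
using System.IO;

static class BinaryInput {
	// Assemble four big-endian bytes into the raw IEEE 754 bit pattern,
	// then reinterpret those bits as a float.
	public static float ReadFloat (Stream input) {
		int bits = input.ReadByte() << 24;
		bits |= input.ReadByte() << 16;
		bits |= input.ReadByte() << 8;
		bits |= input.ReadByte();
		return BitConverter.ToSingle(BitConverter.GetBytes(bits), 0);
	}
}

A skeleton with meshes and many animation keys goes through something like this a great many times.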
That said, SharpJson sounds interesting, since there is no reason for reading JSON to be slower than necessary even if it is not the most efficient format. I'd be happy to include it in spine-csharp. Pharan, maybe you'd like to take a crack at that?
BTW, I've had some fun writing a JSON parser myself, using Ragel. You can check it out here if you're bored. It is extremely lenient, so much so that I invented what I call the "minimal JSON" format where quotes and even commas are optional and it supports C style comments.
Hi Nate,
I understand your feelings regarding some of the binary JSON formats available. Most of them, like BSON, just dump key/value pairs, which is largely useless and almost defeats the point of using binary in the first place. Others don't even optimize types or values, or use index tables.
I also understand that a purpose-built binary format avoids the need to store keys and types, but on the other hand not storing them, as Spine's binary format does, also brings some disadvantages that show up as poor performance and memory usage for big files.
Here are some disadvantages I see in the current Spine binary format:
- Since it relies on an immutable sequence of values (to avoid storing keys), every value must be written even if it's a default value, whereas in a key/value format you can simply omit defaults to reduce file size and parsing time.
- All values are written even if they are duplicated in the document. For example, (x = 0, y = 0) writes 2 floats every time it appears, instead of using a back reference: a single byte indicating the index in a reference table.
- Since there is no type indication, all values are stored without narrowing their type; floats are stored as floats even when they could be stored as a byte, short, or Q-number, and this is a very common case. These downcasts would greatly reduce the file size, specifically for Spine skeletons (see the sketch after this list).
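As a rough illustration of the type-reduction idea (the tags are hypothetical, not the actual OPack opcodes), a writer can narrow a float to a single byte whenever the value happens to be a small whole number:

using System;
using System.IO;

static class CompactFloat {
	const byte TagByte = 0x01, TagFloat = 0x02; // Hypothetical tags.

	public static void Write (BinaryWriter output, float value) {
		if (value >= 0 && value <= byte.MaxValue && value == Math.Floor(value)) {
			output.Write(TagByte);   // 2 bytes total for values like 0, 1, 255.
			output.Write((byte)value);
		} else {
			output.Write(TagFloat);  // 5 bytes for everything else.
			output.Write(value);
		}
	}
}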
For instance, in our case OPack produces a more compact file than Spine's binary format. Our biggest skeleton is 397 KB (344 KB if default values are removed) using OPack, while Spine's binary format generates a 520 KB file.
Note that this difference is without any type reduction; we are still storing floats as floats. I believe that after adding new downcasts to the OPack type list, like FloatAsByte, FloatAsShort, and FloatAsQN, this difference will increase drastically.
Also note that I'm totally in favour of a binary format made specifically for Spine. It's just that the current format does not have any optimization flags to address the problems we are facing. If the current Spine binary format were versioned to support more robust optimizations, I would switch to the native solution immediately.
BR,
Adriano
It's very fast to read a 4-byte float (the input is likely buffered). Sure, we could read 1-5 bytes and do some logic to rebuild the float, but that is an optimization for the binary format's file size. If optimizing for speed and memory usage, the binary format's file size isn't very important. Also, most floats in Spine data are positions, mesh vertices, and UVs. They will very rarely fall on integer values and will require 4 bytes anyway, or 5 bytes if some bits are used for file size optimization.
Omitting information where possible to get a smaller file size is good (as long as it doesn't complicate parsing too much). You may do less IO and save some time there, but I don't think it will make much difference compared to the overall time. Memory usage won't be affected.
Instead of file size, a better comparison would be deserialization time (even memory is unlikely to be an issue). I realize spine-csharp is still missing SkeletonBinary
sorry!
Another option is to use the binary format as is, along with something like LZW, which is very fast compression. In some cases LZW is faster than no compression, solely from reduced IO, though again I don't think we're doing enough IO for that to be significant. The reason to use this would be to reduce on-disk size. GZIP would also work, though not as fast. Then again, I expect many apps are transferred with some sort of compression, such as a ZIP file, JAR, or installer, and skeleton sizes of less than 0.5 MB (or even a few times that) aren't terribly unwieldy.
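As a sketch of that option in C# (using GZIP, since LZW isn't in the standard library; the helper name and path handling are just for illustration):

using System.IO;
using System.IO.Compression;

static class CompressedSkeleton {
	// Opens a gzip-compressed skeleton file and decompresses it on the fly.
	public static Stream Open (string path) {
		var file = File.OpenRead(path);
		return new BufferedStream(new GZipStream(file, CompressionMode.Decompress));
	}
}

The resulting stream can then be handed to whatever reader consumes the binary data.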
Is it missing? I thought it was working. Mitch got it to work.
Oh you're right, I have too many things going on. :p It wasn't Mitch though!
Right, so benchmarking versus SkeletonBinary should be straightforward. Just remember to do warmup, etc like with any benchmark.
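For example, something along these lines (a generic harness; the load delegate is whatever you want to measure):

using System;
using System.Diagnostics;

static class Bench {
	// Runs the action a few times to warm up the JIT and caches, then times it.
	public static void Run (string label, Action load, int warmup = 5, int runs = 20) {
		for (int i = 0; i < warmup; i++) load();
		var stopwatch = Stopwatch.StartNew();
		for (int i = 0; i < runs; i++) load();
		stopwatch.Stop();
		Console.WriteLine("{0}: {1:0.0} ms/run", label, stopwatch.ElapsedMilliseconds / (float)runs);
	}
}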
lol! of course.
Mitch just got it to work in Unity by incorporating it into SkeletonDataAsset.
That's nice, I didn't know there was a SkeletonBinary port for C#.
I've updated Spine and ran some tests. It has good performance in general, although it still needs some optimization for old mobile devices.
Using a big skeleton, the results are as follows (on desktop):
Skel (+SkeletonBinary): 130 ms to decode the file and fill the skeleton structs
OPack (+SkeletonJson): 85 ms to decode the file + 150 ms to fill the skeleton structs using the SkeletonJson API
Since SkeletonJson processing is far from optimized for binary inputs, this benchmark doesn't say much, but this weekend I'll create a BinaryOPackSkeleton so we can compare both properly.
Either way, I will also focus on optimizing the current SkeletonBinary API, since an ad hoc solution will always be faster than a generic one.
Still, the atlas file format is extremely slow to load for medium/big files. It makes a lot of sense to provide a binary format for it as well. Here are some benchmarks (on desktop):
Spine atlas format: 1130 ms to load (115 KB file size)
OPack binary format: 25 ms to load (90 KB file size)
As you can see, using a binary format in this case is roughly 45x faster than the current text format.
BR,
Adriano