Comparison between the Classical and LeafLib models, evaluating training performance, efficiency, and optimization potential.
Published: October 2025
Dataset Volume
30,000 × 41 samples
Training GPU
Apple M1 Pro (16-core GPU, Metal / MPS acceleration)
Model Parameters
~ 4k
Accuracy
Classical Model: 98% (After 15th epoch)
LeafLib Model: 99.96% (After 1st epoch)
Training Time
Classical Model: ~0.2s per epoch
LeafLib Model: ~4s per epoch
Training time for LeafNet v1.0 is not yet fully optimized and will improve in future versions. The next step is to move the time-consuming computation out of Python and into a low-level programming language.
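To illustrate the kind of speedup this migration targets, the sketch below contrasts a hot loop written in pure Python with the same computation delegated to NumPy's compiled C/BLAS kernels. The matrix-vector forward pass here is a hypothetical stand-in, not LeafLib's actual internals; all names (`W`, `x`, the `forward_*` functions) are illustrative assumptions.

```python
import numpy as np

def forward_pure_python(W, x):
    """Matrix-vector product in pure Python: the interpreted style
    that dominates training time before optimization."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

def forward_compiled(W, x):
    """The same computation handed to NumPy, which executes it in
    compiled C/BLAS code instead of the Python interpreter."""
    return np.asarray(W) @ np.asarray(x)

# Hypothetical 64x64 weight matrix and input vector.
W = [[0.01 * (i + j) for j in range(64)] for i in range(64)]
x = [0.5] * 64

y_py = forward_pure_python(W, x)
y_c = forward_compiled(W, x)
assert np.allclose(y_py, y_c)  # identical math, different execution path
```

The results are numerically identical; only the execution path changes, which is why this kind of rewrite can cut per-epoch time without affecting accuracy.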