LeafNet v1.0

Comparison between the Classical and LeafLib models, evaluating training performance, efficiency, and optimization potential.

Published: October 2025

Model Overview

Dataset Volume

30,000 samples × 41 features

Training GPU

Apple M1 Pro (16-core GPU, Metal / MPS acceleration)

Model Parameters

~ 4k

Accuracy

Classical Model: 98% (after epoch 15)

LeafLib Model: 99.96% (after epoch 1)

Training Time

Classical Model: ~0.2s per training run

LeafLib Model: ~4s per training run
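
For reference, a full training run can be timed end to end as in the minimal sketch below. It assumes a PyTorch-style training loop dispatched to the M1 Pro GPU through the Metal / MPS backend; the framework choice, model layout, class count, and synthetic data are assumptions for illustration, not LeafNet or LeafLib internals.

```python
# Minimal timing sketch (assumed PyTorch + MPS; not LeafNet/LeafLib code).
import time

import torch
import torch.nn as nn

# Use the M1 Pro GPU via Metal / MPS when available, otherwise fall back to CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Placeholder network: 41 input features, roughly 4k parameters, 2 output classes (all assumed).
model = nn.Sequential(nn.Linear(41, 90), nn.ReLU(), nn.Linear(90, 2)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-in for the 30,000-sample dataset.
x = torch.randn(30_000, 41, device=device)
y = torch.randint(0, 2, (30_000,), device=device)

start = time.perf_counter()
for epoch in range(15):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
if device.type == "mps":
    torch.mps.synchronize()  # flush queued GPU work before stopping the clock
print(f"training run: {time.perf_counter() - start:.2f}s")
```

Synchronizing the MPS queue before reading the clock avoids under-counting work that is still queued on the GPU.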

⚠ Important Notice

Training time for LeafNet v1.0 is not fully optimized and will be improved in future versions. The next step is to move the time-consuming computations into a low-level programming language instead of Python.
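
As a rough illustration of that direction (not the actual LeafLib plan), the sketch below compiles a tiny placeholder C kernel and calls it from Python through ctypes; the scale_add routine, file names, and build flags are hypothetical.

```python
# Hypothetical sketch: compile a small C kernel and call it via ctypes,
# so the per-element loop runs as native code instead of Python bytecode.
import ctypes
import os
import subprocess
import tempfile

import numpy as np

# Placeholder kernel; scale_add is illustrative, not a LeafLib routine.
C_SRC = """
void scale_add(const float *x, const float *w, float *out, int n) {
    for (int i = 0; i < n; i++) out[i] = x[i] * w[i];
}
"""

# Compile the kernel into a shared library (requires a C compiler such as cc).
workdir = tempfile.mkdtemp()
src_path = os.path.join(workdir, "hot_loop.c")
lib_path = os.path.join(workdir, "libhotloop.so")
with open(src_path, "w") as f:
    f.write(C_SRC)
subprocess.check_call(["cc", "-O3", "-shared", "-fPIC", "-o", lib_path, src_path])

# Bind the compiled routine and call it on NumPy buffers.
lib = ctypes.CDLL(lib_path)
lib.scale_add.restype = None
lib.scale_add.argtypes = [
    np.ctypeslib.ndpointer(np.float32, flags="C_CONTIGUOUS"),
    np.ctypeslib.ndpointer(np.float32, flags="C_CONTIGUOUS"),
    np.ctypeslib.ndpointer(np.float32, flags="C_CONTIGUOUS"),
    ctypes.c_int,
]

x = np.random.rand(30_000).astype(np.float32)
w = np.random.rand(30_000).astype(np.float32)
out = np.empty_like(x)
lib.scale_add(x, w, out, int(x.size))  # the loop now runs in compiled C
```

The same idea applies whether the binding is done with ctypes, cffi, Cython, or a compiled extension module: the hot loop leaves Python and executes as native code.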

Graphs: Accuracy, F1 Score, Loss, Precision, Recall, Time per Epoch