AI, Camera and Production Systems
We treat AI not as a slide title but as a fixture bolted onto the line: it has to do a job, deliver a measurable return, and be replaceable when the time comes.
Models trained on-site — not in the lab
Models trained on public datasets look great in demos and fail on the line, because your lighting, cameras, product variants and belt speed are different. AIOR's approach: collect 1-2 weeks of on-site data, label it (with remote assistants if needed), train the model and validate it on the line. The result is not a score on a 1,000-sample benchmark but consistent performance across 30 days of real production data.
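The 30-day acceptance idea above can be sketched as a rolling-window check: a model passes only if every production day in the window clears a floor, so a single good benchmark run can't hide a bad day. The daily-accuracy numbers and the 0.95 floor below are illustrative assumptions, not AIOR's actual thresholds:

```python
def stable_over_window(daily_accuracy: list[float], floor: float = 0.95) -> bool:
    """True only if EVERY day in the window stays above the floor --
    a one-off benchmark score can't reveal a bad Tuesday."""
    return len(daily_accuracy) > 0 and all(a >= floor for a in daily_accuracy)

# 30 days of per-day accuracy measured on the line (made-up numbers)
window = [0.97, 0.96, 0.98] * 10
accepted = stable_over_window(window)  # model passes only if the whole window holds
```

The point of the sketch is the acceptance criterion, not the metric: the same window logic applies whether the per-day number is accuracy, recall, or a scrap-rate delta.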
What works well
- Surface QC: scratches, stains, colour drift, missing parts.
- Plate / label reading (OCR): logistics, warehouse, vehicle gate.
- Counting: cartons, pallets, end-of-line piece verification.
- Anomaly detection: 'different from normal' behaviour (sound, vibration, heat).
- Work safety: verifying helmets, goggles, gloves and hearing protection in designated PPE zones.
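For the anomaly-detection case in the list above, "different from normal" can be as simple as a z-score against a baseline window of healthy readings. This is a generic sketch, not AIOR's production detector; the vibration values and the 3-sigma cut-off are hypothetical:

```python
import statistics

def anomaly_scores(readings: list[float], baseline: list[float]) -> list[float]:
    """Z-score of each new reading against a 'normal operation' baseline.
    |score| above ~3 flags behaviour that is 'different from normal'."""
    mu = statistics.fmean(baseline)
    sigma = statistics.pstdev(baseline)
    if sigma == 0:
        raise ValueError("baseline has no variance; cannot score deviations")
    return [(r - mu) / sigma for r in readings]

# Vibration amplitude during known-good operation (illustrative values)
baseline = [10.0, 10.2, 9.8, 10.1, 9.9]
scores = anomaly_scores([10.0, 14.5], baseline)
flags = [abs(s) > 3 for s in scores]  # second reading stands out
```

The same scoring works for sound or temperature streams; only the baseline window changes.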
What doesn't work well
Honestly: when data is sparse, rules change constantly, or labelling is expensive (e.g. requiring expert-radiologist-level annotation), AI investment usually doesn't pay back. In those cases classic rule-based systems or simple statistical models are a better fit. We answer "should this even be AI?" first.
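As a contrast, here is what a rule-based alternative for a sparse-data QC task might look like. The part dimensions and tolerance bands are invented for illustration; the point is that explicit rules are transparent, cheap, and trivially updatable when the spec changes:

```python
def part_ok(width_mm: float, weight_g: float) -> bool:
    """Hard tolerance rules -- no training data, no retraining cycle.
    When the spec changes, you edit two numbers instead of relabelling a dataset.
    Tolerances below are made-up example values."""
    return 24.8 <= width_mm <= 25.2 and 98.0 <= weight_g <= 102.0

in_spec = part_ok(25.0, 100.0)   # within both tolerance bands
out_of_spec = part_ok(25.5, 100.0)  # width out of tolerance
```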
After deployment: monitoring and drift control
The day a model goes live is the start of its working life, not the end of the project. Production itself drifts (a new product variant, dust on the lens, seasonal light), so the model must be retrained periodically. AIOR continuously monitors model output; when drift appears we raise an alert, and a small round of fresh data refreshes the model. The payoff: year-one performance is preserved.
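One common way to put a number on drift is the Population Stability Index (PSI) between the score distribution logged at deployment and the live distribution. This is a generic sketch of the technique, not AIOR's monitoring stack, and the 0.2 alert threshold is the usual rule of thumb rather than a universal constant:

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a reference (deployment-time)
    distribution and live production values. Rule of thumb: > 0.2 = drift."""
    lo, hi = min(expected), max(expected)

    def bin_fractions(data: list[float]) -> list[float]:
        counts = [0] * bins
        for x in data:
            i = int((x - lo) / (hi - lo) * bins) if hi > lo else 0
            counts[max(0, min(i, bins - 1))] += 1  # clamp out-of-range values
        # tiny smoothing so empty bins don't produce log(0)
        return [(c + 1e-6) / (len(data) + 1e-6 * bins) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

reference = [i / 100 for i in range(100)]      # scores logged at deployment
live = [x + 0.3 for x in reference]            # this week's shifted scores
drift_alert = psi(reference, live) > 0.2       # triggers a retraining round
```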
Out of scope: no black-box miracle claims
Which features the model relies on, the metrics it is measured by, and its error profile (false-positive vs false-negative rates) are all shared openly. We don't make hollow "AI with 99% accuracy" claims, because the right number depends on production conditions, and transparency pays off more for everyone in the long run.
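The error profile mentioned above can be computed directly from labelled production samples. This generic sketch assumes binary defect labels (1 = defect, 0 = good); the sample labels are illustrative:

```python
def error_profile(y_true: list[int], y_pred: list[int]) -> dict:
    """False-positive and false-negative rates -- the two numbers that matter
    on a line: FP = good parts rejected (scrap cost),
    FN = defects shipped (customer cost). Labels: 1 = defect, 0 = good."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    goods = sum(1 for t in y_true if t == 0) or 1     # avoid division by zero
    defects = sum(1 for t in y_true if t == 1) or 1   # on degenerate label sets
    return {"fp_rate": fp / goods, "fn_rate": fn / defects}

# Two good parts, two defects: one good part flagged, one defect missed
profile = error_profile([0, 0, 1, 1], [0, 1, 1, 0])
```

Reporting both rates separately is the whole point: a single "accuracy" number hides which side of the trade-off the model sits on.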