The Bloomfield Team
Why Your On-Time Delivery Number Is Lying to You
Your on-time delivery number is probably around 93%. Your ERP says so. Your scorecards say so. The slide deck you showed your biggest customer last quarter said so.
Your customer's scorecard says 74%. They are right and you are wrong. The disagreement comes down to a structural flaw in how every major ERP system calculates this metric, a flaw that flatters the shop and punishes the buyer.
Understanding why requires breaking the OTD calculation down to its components.
How the Number Gets Built
Standard OTD math: line items shipped on or before the committed ship date, divided by total line items shipped. Ship 200 line items in March, 186 go out by the date in the system, and the ERP reports 93%.
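Stripped of the ERP wrapper, the calculation is just a ratio. Here is a minimal sketch in Python, with made-up line items standing in for the March shipments (the field layout is illustrative, not any specific ERP's schema):

```python
from datetime import date

# Illustrative line items: (ship_date, committed_ship_date)
march_lines = [
    (date(2025, 3, 18), date(2025, 3, 20)),  # shipped early: counts as on time
    (date(2025, 3, 25), date(2025, 3, 25)),  # shipped on the committed date: on time
    (date(2025, 3, 27), date(2025, 3, 24)),  # shipped after the committed date: late
    # ...and so on for the rest of the month's line items
]

on_time = sum(ship <= committed for ship, committed in march_lines)
print(f"OTD: {on_time / len(march_lines):.0%}")  # 186 of 200 would print 93%
```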
The failure is in those three words: "committed ship date." That date moves. Constantly.
A customer orders parts on March 1 with a requested delivery of March 20. The planner reviews capacity and moves the committed date to March 25. Material arrives late. The date moves again to March 28. The job ships March 27. The ERP calls that on-time because it shipped before the current committed date. The customer needed parts on March 20. They received them a week late. Green checkmark in the system.
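Put that order into code and the flaw is easy to see: the test runs against whichever committed date is current, not against what the customer asked for. A sketch, with illustrative field names:

```python
from datetime import date

# The order from above; field names are illustrative, not a specific ERP schema
order = {
    "requested_date": date(2025, 3, 20),  # what the customer asked for at order entry
    "committed_date": date(2025, 3, 28),  # current value after two reschedules
    "ship_date":      date(2025, 3, 27),
}

on_time_vs_committed = order["ship_date"] <= order["committed_date"]  # True: green checkmark
on_time_vs_requested = order["ship_date"] <= order["requested_date"]  # False: what the customer sees
```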
A 2023 MESA International survey found that 58% of surveyed manufacturers had moved the committed delivery date at least once on more than 30% of their orders in the prior 12 months. In high-mix job shops, the share of orders with at least one date change exceeds 40%. Every date change widens the gap between what the metric reports and what the customer experiences.
The Partial Shipment Problem
A customer orders 1,000 brackets for April 5. The shop ships 800 on April 4 and the remaining 200 on April 12. Many ERP configurations score that as one on-time shipment and one late shipment. Fifty percent OTD.
The customer sees it differently. They needed 1,000 brackets to run their assembly line on April 5. They got 800. The line could not run a full shift. The rest arrived a week later. The order was late.
Some shops calculate OTD by line item. Some by order. Some by unit quantity. The choice of denominator changes the number, and most shops pick the denominator that flatters them. Nobody configured the metric to answer the question the customer actually cares about: did all the parts arrive when they were supposed to?
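The same bracket order produces three different numbers depending on which denominator the report uses. A sketch with the quantities from above (field names are illustrative):

```python
from datetime import date

requested = date(2025, 4, 5)
shipments = [
    {"qty": 800, "ship_date": date(2025, 4, 4)},
    {"qty": 200, "ship_date": date(2025, 4, 12)},
]

# By shipment or line item: each shipment scored independently
by_shipment = sum(s["ship_date"] <= requested for s in shipments) / len(shipments)

# By unit quantity: on-time units over total units
total_qty = sum(s["qty"] for s in shipments)
by_quantity = sum(s["qty"] for s in shipments if s["ship_date"] <= requested) / total_qty

# By complete order: on time only if every unit shipped by the requested date
by_order = float(all(s["ship_date"] <= requested for s in shipments))

print(f"{by_shipment:.0%}  {by_quantity:.0%}  {by_order:.0%}")  # 50%  80%  0%
```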
What the Customer Measures
Large OEMs and tier-one manufacturers track suppliers on OTIF: On-Time In-Full. Complete ordered quantity, on or before the original requested date. No partial credit. No date changes.
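Expressed as the check a customer's scorecard effectively runs, OTIF is a single pass/fail test against the original request. A sketch (the function name and argument shapes are illustrative):

```python
from datetime import date

def otif(ordered_qty, requested_date, shipments):
    """On-Time In-Full: the complete ordered quantity shipped on or before
    the original requested date. No partial credit, no date changes."""
    qty_by_request_date = sum(qty for qty, ship_date in shipments if ship_date <= requested_date)
    return qty_by_request_date >= ordered_qty

# The bracket order: 800 units shipped April 4, 200 more April 12, requested for April 5
print(otif(1000, date(2025, 4, 5), [(800, date(2025, 4, 4)), (200, date(2025, 4, 12))]))  # False
```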
A shop reporting 94% internal OTD might score 72% on their customer's OTIF scorecard. That 22-point gap is where the date changes, partial shipments, and measurement differences live. And it has direct commercial consequences. Many OEM customers use OTIF scores to allocate future programs. Above 90% gets first look at new work. Below 80% gets probation or removal from the approved vendor list.
The shop running at a self-reported 94% may not learn it is at risk until the quarterly business review, when that 72% appears on a red-highlighted slide.
Where the Delays Actually Start
Measuring correctly is one problem. Understanding root cause is the other, and it is the one that leads to actual improvement.
Most shops treat a late shipment as a production problem. The parts did not move fast enough. But tracing the delay back through the process reveals that production is the proximate cause only 35 to 40% of the time.
The rest breaks down roughly like this:
- Material delays (25 to 30%): Raw material arrived late, was ordered too late, or was the wrong specification. The job sat waiting before it could start.
- Planning and scheduling errors (15 to 20%): The job was scheduled too late for its actual processing time, or it was bumped by a rush order that came in after the original schedule was set.
- Engineering and pre-production (10 to 15%): Drawing revisions after order entry. Three iterations on first article. Programming took longer because the geometry was more complex than it looked during quoting.
- Quality issues (5 to 10%): Failed first run. Rework. A second setup and run that consumed time the schedule never accounted for.
When a shop tracks only whether the shipment left on time and ignores where in the process the delay began, the root causes stay invisible. Production absorbs the pressure. The actual bottleneck sits in purchasing, planning, or engineering.
Measuring What Matters
An honest OTD metric requires three things that most ERP configurations do not provide by default.
First, lock the original requested date. Capture it at order entry. Preserve it, unchanged, for the life of the order. The committed date can move based on capacity and material. The original request date is the benchmark. Add a field that nobody can edit after initial entry.
Second, measure by complete order. An order is on-time only when the full quantity ships by the original requested date. Partial shipments count as late until the remainder arrives. This aligns the metric with what the customer experiences.
Third, capture a reason code for every delay. Material delay. Capacity conflict. Engineering revision. Quality rework. Customer-requested change. Each code feeds a Pareto analysis that reveals where the systemic failures are and which fixes would move the needle most.
This requires reconfiguring ERP date handling, building a reporting layer that calculates OTD against the original date, and creating a simple workflow for capturing delay reasons. The data to do this exists in most ERP systems. The reporting configuration does not.
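The reporting layer that does all three does not need to be elaborate. Here is a minimal sketch; the field names, reason codes, and sample orders are illustrative, not a prescription for any particular ERP:

```python
from collections import Counter
from datetime import date

# Illustrative closed orders. "requested_date" is captured at order entry and never edited;
# "completed_ship_date" is the date the last piece of the order shipped.
orders = [
    {"requested_date": date(2025, 3, 20), "completed_ship_date": date(2025, 3, 27), "reason": "material_delay"},
    {"requested_date": date(2025, 3, 21), "completed_ship_date": date(2025, 3, 21), "reason": None},
    {"requested_date": date(2025, 4, 5),  "completed_ship_date": date(2025, 4, 12), "reason": "rush_order_bump"},
    {"requested_date": date(2025, 4, 8),  "completed_ship_date": date(2025, 4, 7),  "reason": None},
]

# Honest OTD: the full order shipped on or before the ORIGINAL requested date
late = [o for o in orders if o["completed_ship_date"] > o["requested_date"]]
print(f"OTD vs original request: {1 - len(late) / len(orders):.0%}")

# Pareto of delay reasons: which systemic failure accounts for the most late orders
for reason, count in Counter(o["reason"] for o in late).most_common():
    print(f"{reason}: {count} late orders ({count / len(late):.0%} of late orders)")
```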
What Accurate OTD Data Reveals
A CNC shop in Wisconsin implemented honest OTD measurement in 2024. Their internal number dropped from 94% to 71% in the first month. The owner called it "uncomfortable but useful."
Within 90 days, reason code data showed that 34% of late deliveries originated in material procurement. Steel bar stock from their primary supplier was arriving 3.2 days late on 40% of orders. The shop had been compensating by padding the production schedule, adding buffer time that consumed capacity. They confronted the supplier with the data and established a secondary source. Material-related delays dropped 60% within two quarters.
Another 22% of late deliveries came from scheduling conflicts where rush orders displaced standard work. The production manager had been accepting rush requests without adjusting dates on the jobs getting bumped. Building a scheduling tool that showed the downstream impact of inserting a rush order gave the team the ability to make informed decisions about which jobs to move and which customers to call before the delays materialized.
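A toy version of that downstream-impact check, reduced to a single machine with made-up jobs and dates (the shop's actual tool is not described in this much detail), might look like this:

```python
from datetime import date, timedelta

# Illustrative one-machine queue: (job, run_days, requested_date)
queue = [("A-114", 3, date(2025, 5, 9)), ("B-207", 4, date(2025, 5, 14)), ("C-331", 2, date(2025, 5, 16))]

def finish_dates(jobs, start=date(2025, 5, 6)):
    """Run jobs back to back and return each job's finish date."""
    out, clock = {}, start
    for name, run_days, _ in jobs:
        clock += timedelta(days=run_days)
        out[name] = clock
    return out

before = finish_dates(queue)
after = finish_dates([("RUSH-990", 2, date(2025, 5, 8))] + queue)  # insert the rush job first

for name, _, requested in queue:
    if after[name] > requested >= before[name]:  # was on track, now misses the request
        print(f"{name} now finishes {after[name]} and misses its {requested} request: call the customer")
```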
Within six months, the shop's actual on-time delivery rate improved from 71% to 84%. Their largest customer noticed. Unprompted, the purchasing manager mentioned at the next quarterly review that delivery performance had improved.
That happened because the shop measured what was true instead of what felt good. The number got worse before it got better. The operation improved because the metric finally pointed at the right problems.
Find out what your real OTD number is
We will help you build an OTD metric that measures what your customers actually experience and shows you where the delays originate.
Talk to Our Team →