The Bloomfield Team
Why Your On-Time Delivery Number Is Lying to You
Ask a contract manufacturer about their on-time delivery rate and you will hear a number above 90%. Usually 93 to 96%. The number comes from the ERP, calculated monthly, and it appears on scorecards, customer presentations, and the company website.
Ask their three largest customers the same question. You will hear a different number. Usually 10 to 20 points lower.
Both sides have data to support their claim. The disagreement comes from what the metric actually measures, which in most manufacturing operations is something far narrower than what the customer experiences.
How the Number Gets Built
The standard OTD calculation in most ERP systems works like this: take the number of line items shipped on or before the committed ship date, divide by the total number of line items shipped in the period. If the shop shipped 200 line items in March and 186 of them went out on or before the date in the system, OTD is 93%.
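The calculation described above can be sketched in a few lines. This is a minimal illustration, not any particular ERP's logic; the record layout and dates are hypothetical.

```python
from datetime import date

# Hypothetical line-item records: (ship_date, committed_date)
lines_shipped = [
    (date(2025, 3, 10), date(2025, 3, 12)),  # shipped two days early
    (date(2025, 3, 15), date(2025, 3, 14)),  # shipped one day late
    (date(2025, 3, 20), date(2025, 3, 20)),  # shipped on the committed date
]

def otd_rate(lines):
    """ERP-style OTD: share of line items shipped on or before
    the committed ship date in the period."""
    on_time = sum(1 for ship, committed in lines if ship <= committed)
    return on_time / len(lines)

print(f"{otd_rate(lines_shipped):.0%}")  # 2 of 3 on time -> 67%
```

Everything that follows in this article is about what goes into `committed_date` and what counts as a unit in the denominator.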
The problem starts with the phrase "committed ship date." In most shops, this date gets changed. Often multiple times.
A customer places an order on March 1 with a requested delivery of March 20. The shop enters the order, reviews capacity, and realizes March 20 is tight. The planner moves the committed date to March 25. Material arrives late on March 10. The committed date moves again to March 28. The job ships on March 27. The ERP records this as on-time, because the shipment went out before the current committed date of March 28.
The customer, who needed the parts on March 20, experienced a 7-day late delivery. The shop's OTD report shows a green checkmark.
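The gap comes down to which date the shipment is compared against. A sketch of the order from the example, with an assumed year, makes the divergence concrete:

```python
from datetime import date

# The order from the example: requested March 20, committed date
# rescheduled twice, shipped March 27. Year is assumed for illustration.
original_request = date(2025, 3, 20)
committed_history = [date(2025, 3, 20), date(2025, 3, 25), date(2025, 3, 28)]
ship_date = date(2025, 3, 27)

# ERP view: compare against the *current* committed date.
erp_on_time = ship_date <= committed_history[-1]

# Customer view: compare against the original requested date.
customer_on_time = ship_date <= original_request
days_late = (ship_date - original_request).days

print(erp_on_time, customer_on_time, days_late)  # True False 7
```

Same shipment, two verdicts. The only difference is which element of `committed_history` the metric is allowed to see.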
This happens constantly. A 2023 survey by MESA International found that 58% of manufacturers surveyed had moved committed delivery dates at least once on more than 30% of orders in the preceding 12 months. In job shops with high product mix, the share of orders with at least one moved date rises above 40%.
Every time a date moves, the gap between what the metric reports and what the customer experiences grows wider.
The Partial Shipment Problem
A customer orders 1,000 brackets with a delivery date of April 5. The shop ships 800 on April 4 and the remaining 200 on April 12. In many ERP configurations, the first shipment of 800 units counts as on-time because it went out before the due date. The second shipment of 200 counts as late.
From the OTD calculation, this order is 50% on-time, 50% late: one shipment on-time, one late. From the customer's perspective, the order was late. They needed 1,000 brackets to run their assembly line on April 5. They got 800. The line could not run a full shift. The remaining 200 arrived a week later.
Some shops calculate OTD by line item. Some by order. Some by unit quantity. The choice of denominator changes the number, and most shops choose the denominator that makes the number look best. This is not deliberate deception. It is a consequence of the ERP's default settings and the fact that nobody has configured the metric to match what the customer actually cares about: did all the parts arrive when they were supposed to?
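The denominator effect is easy to demonstrate on the bracket order above. This sketch scores the same two shipments three ways; the year is assumed for illustration.

```python
from datetime import date

due = date(2025, 4, 5)
# The bracket order: 800 units shipped April 4, 200 shipped April 12.
shipments = [(800, date(2025, 4, 4)), (200, date(2025, 4, 12))]

total_units = sum(qty for qty, _ in shipments)

# By shipment: each shipment scored independently.
by_shipment = sum(1 for _, d in shipments if d <= due) / len(shipments)

# By unit quantity: on-time units over total units.
by_unit = sum(qty for qty, d in shipments if d <= due) / total_units

# By complete order: on time only if everything shipped by the due date.
by_order = 1.0 if all(d <= due for _, d in shipments) else 0.0

print(f"{by_shipment:.0%} {by_unit:.0%} {by_order:.0%}")  # 50% 80% 0%
```

One order, three defensible numbers: 50%, 80%, or 0%. Only the last one matches what the customer on the assembly line experienced.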
What the Customer Measures
Large OEMs and tier-one manufacturers track their suppliers using a metric called OTIF: On-Time In-Full. A shipment counts as on-time only if the complete ordered quantity arrives on or before the original requested date. No partial credit. No date changes.
The difference between a shop's internal OTD and their customer's OTIF score can be staggering. A shop reporting 94% internal OTD might score 72% on their customer's OTIF scorecard. That 22-point gap represents the date changes, partial shipments, and measurement differences between the two systems.
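The OTIF rule is strict but simple to state in code. This is a sketch of the general definition, not any specific customer's scorecard logic; the order records and dates are hypothetical.

```python
from datetime import date

def otif(orders):
    """OTIF: an order counts only if the full ordered quantity shipped
    on or before the original requested date. No partial credit,
    no rescheduled dates."""
    def in_full_on_time(order):
        shipped_by_due = sum(
            qty for qty, d in order["shipments"] if d <= order["requested"]
        )
        return shipped_by_due >= order["qty_ordered"]
    return sum(1 for o in orders if in_full_on_time(o)) / len(orders)

orders = [
    {"requested": date(2025, 4, 5), "qty_ordered": 1000,
     "shipments": [(800, date(2025, 4, 4)), (200, date(2025, 4, 12))]},
    {"requested": date(2025, 4, 10), "qty_ordered": 500,
     "shipments": [(500, date(2025, 4, 9))]},
]

print(f"{otif(orders):.0%}")  # 1 of 2 orders in full and on time -> 50%
```

Note that the first order scores zero despite 80% of its units arriving a day early. That is the mechanism behind the 22-point gap.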
This gap has commercial consequences. Many tier-one and OEM customers use OTIF scores to allocate future orders. A supplier scoring above 90% OTIF gets first look at new programs. A supplier scoring below 80% gets put on probation or dropped from the approved vendor list entirely. The shop that thinks it is performing at 94% because its ERP says so may not realize it is at risk until the customer's quarterly business review, where the 72% number appears on a red-highlighted slide.
Where the Delays Actually Start
Measuring OTD correctly is the first problem. Understanding why deliveries are late is the second, and it is the one that leads to improvement.
In most shops, a late shipment is treated as a production problem. The parts did not get through the shop fast enough. But when you trace the delay back through the process, production is the proximate cause only about 35 to 40% of the time.
The rest breaks down roughly like this:
- Material delays (25 to 30%): Raw material arrived late from the supplier, was not ordered in time, or was the wrong specification. The job sat waiting for material before it could start.
- Planning and scheduling errors (15 to 20%): The job was scheduled to start too late to meet the ship date given the actual processing time required. Or it was scheduled correctly but bumped by a higher-priority job that came in after the original schedule was set.
- Engineering and pre-production (10 to 15%): The customer's drawings were revised after the order was placed. The first article took three iterations to approve. The programming took longer than expected because the geometry was more complex than it appeared in the quoting phase.
- Quality issues (5 to 10%): The first run failed inspection. Rework was required. A second setup and run consumed time that was not in the schedule.
When a shop measures only whether the shipment left on time and does not track where in the process the delay originated, the root causes remain invisible. The production team gets pressure to move faster while the actual bottleneck sits in purchasing, planning, or engineering.
Measuring What Matters
An honest OTD metric requires three things that most ERP configurations do not provide by default.
First, lock the original requested date. The date the customer asked for should be captured at order entry and preserved, unchanged, for the life of the order. The committed date can change based on capacity and material availability, but the original request date is the benchmark against which delivery performance is measured. This means adding a field to the order record that the system does not allow anyone to edit after initial entry.
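The locked field can be expressed as a read-only attribute. This is a sketch of the idea, not an ERP customization; the class and field names are made up for illustration.

```python
from datetime import date

class Order:
    """Sketch: capture the original requested date once at order entry
    and make it read-only, while the committed date stays editable."""
    def __init__(self, original_request: date):
        self._original_request = original_request  # captured at order entry
        self.committed_date = original_request     # planners may move this

    @property
    def original_request(self) -> date:
        # No setter is defined, so assignment raises AttributeError.
        return self._original_request

order = Order(date(2025, 3, 20))
order.committed_date = date(2025, 3, 28)  # rescheduling is allowed
# order.original_request = date(2025, 3, 28)  # would raise AttributeError
```

In an ERP, the equivalent is a custom field with edit permissions removed after initial entry; the principle is the same: one date the schedule can negotiate, one date it cannot.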
Second, measure by complete order, not by line item or shipment. An order is on-time only when the full quantity ships by the original requested date. Partial shipments count as late until the remainder arrives. This aligns the metric with what the customer experiences.
Third, capture the reason code for every delay. When a committed date moves or a shipment goes out late, someone in the operation should record why. Material delay. Capacity conflict. Engineering revision. Quality rework. Customer-requested change. Each reason code feeds a Pareto analysis that reveals where the systemic issues are, and which improvements would have the largest impact on actual delivery performance.
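Once reason codes are logged, the Pareto analysis is a frequency count with a cumulative share. A minimal sketch, with a made-up delay log standing in for real data:

```python
from collections import Counter

# Hypothetical reason codes, one logged per delay event (a moved
# committed date or a late shipment).
delay_log = [
    "material_delay", "material_delay", "capacity_conflict",
    "material_delay", "engineering_revision", "quality_rework",
    "capacity_conflict", "material_delay", "customer_change",
    "material_delay",
]

counts = Counter(delay_log)
total = sum(counts.values())

# Pareto view: causes ranked by frequency with cumulative share.
cumulative = 0
for reason, n in counts.most_common():
    cumulative += n
    print(f"{reason:22s} {n:2d}  {n/total:4.0%}  cum {cumulative/total:4.0%}")
```

With even a few months of honest codes, the top one or two rows of this table tell you where to spend improvement effort, which is exactly how the Wisconsin shop described below found its material problem.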
Implementing these three changes requires reconfiguring how the ERP handles order dates, building a reporting layer that calculates OTD against the original date, and creating a simple workflow for capturing delay reasons. The data to do this exists in most ERP systems. The reporting configuration does not.
What Accurate OTD Data Reveals
A CNC shop in Wisconsin implemented honest OTD measurement in 2024. Their internal number dropped from 94% to 71% in the first month. The owner described the experience as "uncomfortable but useful."
Within 90 days, the reason code data showed that 34% of late deliveries originated in material procurement. Steel bar stock from their primary supplier was arriving an average of 3.2 days after the promised delivery date on 40% of orders. The shop had been compensating by padding their production schedule, adding buffer time that made jobs take longer and reduced capacity. When they confronted the supplier with the data and established a secondary source, material-related delays dropped by 60% within two quarters.
Another 22% of late deliveries came from scheduling conflicts where rush orders bumped standard orders. The production manager had been accepting rush requests without adjusting the delivery dates on the displaced jobs. By building a scheduling tool that showed the downstream impact of inserting a rush order, the team could make informed decisions about which jobs to bump and which customers to call with revised dates before the delays happened.
Within six months of implementing accurate OTD tracking, the shop's actual on-time delivery rate improved from 71% to 84%. Their largest customer noticed. Unprompted, the purchasing manager mentioned at their quarterly review that delivery performance had improved and asked what changed.
That conversation happened because the shop stopped measuring what felt good and started measuring what was true. The number got worse before it got better. The operation got better because the number was finally pointing at the right problems.
Find out what your real OTD number is
We will help you build an OTD metric that measures what your customers actually experience and shows you where the delays originate.
Talk to Our Team →