The Bloomfield Team
A Plant Manager's Guide to Evaluating New Technology
A 2024 survey by the Manufacturing Leadership Council found that 74% of manufacturers have purchased at least one software tool in the past three years that they no longer use. The median cost of a failed purchase, including implementation time and license fees, was $127,000. That number does not include what it cost the team in morale, in trust, in willingness to try the next thing.
Plant managers sit at the center of these decisions. The C-suite wants modernization. The vendor's sales team makes it sound seamless. The shop floor wants to know if this means more screens to check. The plant manager has to figure out which problems are real, which solutions actually match those problems, and whether the operation can absorb the change without breaking what already works.
That is a harder job than it sounds, and most evaluation frameworks are written by the people selling the technology.
Start With the Constraint, Not the Category
The first mistake is shopping by category. A plant manager hears about MES systems, reads three comparison articles, sits through two demos, and picks the one that scored highest on features. Six months later, the MES is half-implemented and nobody on the floor opens it because the real problem was never visibility into job status. The real problem was that the scheduler had no reliable way to know which jobs were actually running.
Before evaluating any tool, write down the constraint in one sentence. "Our quoting takes too long because estimators spend 60% of their time searching for historical job data." Or: "We miss delivery dates because schedule changes on the floor do not reach the front office until end of shift." Or: "When Dave retires next March, we lose 30 years of setup knowledge for our five-axis work."
If you cannot state the constraint in one sentence with a specific cause attached to it, you are not ready to evaluate technology. You are ready to map your process and find where it breaks.
Five Questions That Filter Out 80% of Bad Purchases
Once the constraint is clear, these five questions separate tools that will help from tools that will sit unused.
1. Does it solve my specific constraint, or does it solve a general category of problems? General-purpose tools require extensive configuration. A tool built around quoting workflows for job shops is different from an ERP module that includes a quoting feature. Specificity matters. The more precisely the tool matches the actual workflow your team uses, the faster adoption goes and the sooner you see results.
2. What data does it need that I actually have? Every demo runs on clean data. Your data lives in spreadsheets, emails, ERP exports with inconsistent fields, and binders on the shop floor. Ask the vendor what happens when the data is messy. Ask what the onboarding process looks like for a shop that has 15 years of job records spread across two ERP migrations and a filing cabinet. If the answer is vague, the implementation will be painful.
3. Who on my team has to change their daily routine? Technology that requires an operator to open a new application, enter data into a new screen, or check a new dashboard is asking for a behavior change. Behavior changes fail at rates above 60% without sustained management support. The best tools fit into existing routines or replace a step that people already dislike. The worst tools add steps.
4. What does this look like at month three, not month one? Month one is implementation. Month three is reality. Ask the vendor for references at the 90-day mark and beyond. Ask those references what surprised them, what took longer than expected, and what they would do differently. Month-one enthusiasm is free. Month-three usage data tells you whether the tool actually stuck.
5. Can I measure the result in dollars or hours? If the benefit is "better visibility" or "improved communication," it will be nearly impossible to justify renewal when budget season arrives. The constraint you identified at the start should have a measurable cost attached to it. Quote turnaround time. Hours spent searching for information. Rework rate on jobs where knowledge was lost. Pick a number. Track it before and after.
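As a sketch of how to attach a dollar figure to a constraint before evaluating anything, here is a hypothetical calculation. The headcount, loaded rate, and search fraction below are illustrative assumptions, not figures from this article:

```python
# Hypothetical cost-of-constraint estimate. All inputs are
# illustrative assumptions; substitute your shop's real numbers.
estimators = 3            # assumed headcount
hours_per_week = 40       # hours worked per estimator per week
search_fraction = 0.60    # assumed share of time spent hunting for job history
loaded_rate = 55.0        # assumed loaded hourly rate, USD
working_weeks = 50        # assumed working weeks per year

weekly_search_hours = estimators * hours_per_week * search_fraction
annual_cost = weekly_search_hours * loaded_rate * working_weeks

print(f"Search hours per week: {weekly_search_hours:.0f}")
print(f"Annual cost of constraint: ${annual_cost:,.0f}")
```

Whatever the real inputs turn out to be, the output is a single number you can track before and after implementation, which is exactly what budget season will ask for.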
The Demo Is Designed to Impress You
Every software demo uses perfect data, a scripted scenario, and a presenter who has done this 200 times. That is the point of a demo. The problem is that plant managers often evaluate the tool based on the demo rather than on how the tool will perform with their data, their people, and their constraints.
Bring your own data to the second call. Upload a real RFQ, a real job history export, a real production schedule. Ask the vendor to walk through the tool with your information instead of theirs. This single step reveals more about fit than any feature checklist. If the vendor resists, that tells you something about how the implementation will go.
Adoption Is the Only Metric That Matters at Launch
The technology could be perfect for the problem. If the people who need to use it do not use it, none of that matters. Adoption is where most manufacturing technology purchases die, and it almost always comes down to three factors: the tool adds work instead of removing it, the team was not involved in the selection, or training was a single session that nobody remembered two weeks later.
Involve two or three people from the floor in the evaluation process from the start. Let them see the demos. Let them ask questions. When they are part of choosing the tool, they become advocates for using it. When a tool is chosen for them and handed down, they become the people who find reasons it does not work.
Training should happen in the workflow, not in a conference room. The best implementations put the new tool into a single process first, run it alongside the old method for two weeks, and let the team see results before expanding. Rolling out across the entire operation on day one is a recipe for exactly the kind of failure that makes the next technology purchase harder to sell internally.
The Cost of Getting It Right
A properly evaluated technology purchase that solves a real constraint and gets adopted by the team is one of the highest-leverage investments a plant manager can make. A quoting tool that cuts turnaround from five days to one can add over a million dollars in annual revenue from the same number of RFQs. A knowledge capture system implemented before a key retirement prevents years of retraining costs and quality failures.
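To see how a faster quote turnaround can translate into seven-figure revenue, here is a hypothetical back-of-envelope. Every figure in it (RFQ volume, average job value, win rates) is an illustrative assumption, not data from this article:

```python
# Hypothetical revenue impact of faster quoting. All inputs are
# illustrative assumptions; the structure, not the numbers, is the point.
rfqs_per_year = 1200      # assumed annual RFQ volume (unchanged by the tool)
avg_job_value = 25_000    # assumed average job value, USD
win_rate_before = 0.18    # assumed win rate at a 5-day quote turnaround
win_rate_after = 0.22     # assumed win rate at a 1-day turnaround

added_jobs = rfqs_per_year * (win_rate_after - win_rate_before)
added_revenue = added_jobs * avg_job_value

print(f"Added jobs per year: {added_jobs:.0f}")
print(f"Added revenue per year: ${added_revenue:,.0f}")
```

The same structure works in reverse for the cost of the constraint: plug in today's numbers, then remeasure at month three.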
The cost of getting it wrong is measured in more than dollars. Every failed implementation makes the next proposal harder. The floor develops an immune response to new tools. The phrase "we tried that" becomes the default objection.
That is why evaluation matters more than selection. A rigorous process that starts with the right constraint, involves the right people, and tests with real data will surface the right tool. Skip any of those steps and the outcome depends on luck. Plant managers who build real things for a living deserve a better system than that.