Why Choosing an AI Partner Is Harder Than Choosing Software
AI consulting for manufacturers is the process of selecting and working with an external partner to identify, build, and deploy artificial intelligence tools that solve specific operational problems on the shop floor. The right partner produces custom software your team uses daily. The wrong partner produces a report that describes what custom software could theoretically do.
When a manufacturer buys a CNC machine, the evaluation is straightforward. Spindle speed, axis count, tolerance capability, footprint, price. The specs exist. They can be compared. The machine either holds a half-thou or it cannot.
AI consulting has none of that clarity. Two firms can describe identical capabilities using identical language and deliver wildly different outcomes. One will embed engineers on your floor for six weeks, map your quoting workflow at the spreadsheet level, build a tool your estimator opens every morning, and iterate until the numbers move. The other will send a team of MBAs who interview your leadership, produce a 90-page strategy document, and disappear before anything gets built. Both call themselves AI consulting firms. Both charge six figures. The outcomes have nothing in common.
That is why the selection process matters more than the technology. The AI models are available to everyone. GPT-4, Claude, open-source alternatives. The differentiator is whether the firm can take those models and wire them into the specific data, workflows, and decision points that define your operation. That work requires a specific kind of expertise that most consulting firms have not built yet.
The Four Types of AI Partners for Manufacturing
Every firm pitching AI services to manufacturers falls into one of four categories. Understanding which type you are talking to is the first filter.
Type 1: The Enterprise Consulting Firm
McKinsey, Deloitte, Accenture, and their regional equivalents. These firms have AI practices staffed with data scientists and strategy consultants. Their strength is organizational change management and C-suite alignment. Their weakness in manufacturing is distance from the floor. The team that scopes your project has likely never used JobBOSS, never parsed a quoting spreadsheet, never watched an estimator toggle between four systems to build a single quote. They operate at the strategy layer. For a $500 million manufacturer that needs AI governance frameworks and enterprise-wide roadmaps, the enterprise firm makes sense. For a $15 million job shop that needs a quoting tool built on top of its ERP data, the enterprise firm is selling a hammer to someone who needs a scalpel.
Typical engagement: $200,000 to $1 million. Timeline: 3 to 9 months. Deliverable: strategy document, vendor selection framework, implementation roadmap. Working software: rarely included in the initial scope.
Type 2: The SaaS Vendor with "AI" Bolted On
These are existing manufacturing software companies that have added AI features to their product. ERP vendors, MES platforms, CPQ tools. The AI is a feature inside a product you license. The advantage is integration with the vendor's existing ecosystem. The limitation is that you get the AI they built for all their customers, tuned to generic manufacturing workflows that may or may not match yours. If your quoting process, data structure, or competitive requirements differ from the median customer, the AI features will perform at the median. You cannot customize the model. You cannot train it on your specific data. You rent the capability and accept whatever the vendor prioritizes on their roadmap.
Typical cost: $1,500 to $8,000 per month. Timeline: 2 to 6 weeks to activate. Deliverable: feature access within an existing platform. Customization: limited to configuration options.
Type 3: The Freelance Data Scientist
Individual contractors or small teams with machine learning expertise. Often PhD-level talent with strong technical skills and narrow domain experience. They can build a model. They may struggle with the full stack required to turn that model into production software your team uses daily: the interface, the ERP integration, the data pipeline, the deployment infrastructure, the ongoing maintenance. A freelance data scientist is a good fit for a proof of concept or a specific analytical question. They are a risky bet for a production system that needs to run reliably every day for years.
Typical cost: $50 to $250 per hour. Timeline: highly variable. Deliverable: usually a model or prototype. Production readiness: depends entirely on the individual.
Type 4: The Manufacturing-Focused AI Builder
These are firms that build custom AI software specifically for manufacturers. They know the ERP systems, understand how quoting works, have seen the spreadsheets taped to the wall beside the CNC, and build tools that connect to the data your shop actually produces. They are smaller than the enterprise firms. They move faster. They ship working software, not strategy documents. The tradeoff is scale: they cannot deploy across 40 global facilities simultaneously, and they do not carry the brand name that makes a board of directors comfortable. For manufacturers in the $5 million to $200 million range running 20 to 500 employees, this category delivers the highest probability of a tool that actually gets used.
Typical engagement: $75,000 to $250,000. Timeline: 6 to 14 weeks. Deliverable: working custom software deployed in your environment. Ongoing: support, refinement, and expansion as usage patterns reveal new opportunities.
Red Flags That Should End the Conversation
Fifteen years of enterprise software sales created a playbook that AI consulting firms now use without modification. These signals indicate you are talking to a firm that will consume budget without producing operational results.
They lead with the technology, not the problem. If the first meeting is a demo of their AI platform rather than a series of questions about your quoting bottleneck, your ERP data structure, or your estimator's daily workflow, the firm is selling a product rather than solving a problem. The technology is a means. The operational improvement is the end. Any firm that reverses this order does not understand manufacturing.
They cannot name the ERP systems their clients run. Ask them directly. What ERP systems have you integrated with? If the answer is vague or generic ("we work with all major ERPs"), they have not done the integration work. Connecting to JobBOSS is different from connecting to Epicor is different from connecting to ProShop. The specifics matter. A firm that has done real ERP integration work will name the systems, describe the data models, and tell you where the pain points are.
They promise ROI numbers in the first meeting. A credible partner cannot estimate your return before understanding your data, your processes, and your current performance baselines. Any firm that walks into a first meeting with a projected 10x ROI is running a sales process, not an engineering assessment. Real ROI projections require your actual numbers.
Their case studies are anonymized. "A major aerospace manufacturer" is not a reference. Ask for names. Ask to call the client. A firm that cannot produce a single referenceable client either does not have satisfied clients or has not done the work they claim. Neither is acceptable.
The discovery phase is free. Free discovery means the discovery is a sales process disguised as consulting. Real discovery requires engineering time, data assessment, and workflow mapping. That work has value. A firm that gives it away is either cutting corners on the assessment or embedding the cost in an inflated build phase where you will not see it until the invoice arrives.
They staff the project with junior consultants after senior partners sell it. The classic bait-and-switch. The people in the room during the sales process should be the people building the tool. If the firm plans to hand your project to a team you have not met, the expertise you bought during the sales process is not the expertise that will be doing the work. Ask who will be assigned to the project, by name, before you sign.
The 12 Questions to Ask Before Signing
These questions are designed to separate firms that build things from firms that talk about building things. The answers reveal more about a consulting partner than any pitch deck.
- What manufacturing ERP systems have you integrated with in the last 12 months? You want specific names and versions. JobBOSS 2024, Epicor Kinetic 2023.2, ProShop. If they say "SAP" and your shop runs E2, the experience gap is real.
- Can I talk to a client in my size range and industry? A firm that works with $500 million automotive OEMs has different expertise than one that works with $20 million job shops. The best reference is a shop that looks like yours.
- What does your team look like on day one of the build? You want names, roles, and where they will physically be. Remote is fine for software development. The discovery phase should involve someone walking your floor, watching your estimator work, and reading your quoting spreadsheets firsthand.
- What is the first deliverable, and when do I see it? The answer should be a working piece of software, not a document. If the first deliverable arrives in week 8 and it is a requirements specification, the firm is running a waterfall process that will take six months to produce anything usable.
- How does the system connect to my ERP? You want a technical answer. API, database connection, scheduled export. If the answer involves "our platform" rather than your data, the firm is selling you their product rather than building yours.
- What happens if the first version does not perform? The right answer describes an iteration process. The wrong answer describes a change order.
- Who owns the code? If you pay for custom software, you should own the code. If the firm retains ownership and licenses it back to you, you are renting, not buying. Understand this before the engagement starts.
- What data do you need from me, and what do you do if the data quality is poor? A firm that expects clean, well-structured data has not worked with many manufacturers. Real shop data is messy. The right partner has a process for handling that reality.
- How do you measure success? The metrics should be operational: quote turnaround time, win rate, margin accuracy, estimator throughput. If the metrics are engagement-based ("user adoption," "model accuracy"), the firm is measuring their deliverable rather than your outcome.
- What is the ongoing cost after the build? Hosting, maintenance, model updates, support. A $150,000 build with $5,000 per month in ongoing costs is different from a $150,000 build with $1,500 per month. Get the full picture.
- What happens if my key estimator leaves during the project? A firm that has done this before will have an answer. Knowledge capture is a core part of any manufacturing AI project, and the team should account for the risk of losing a subject matter expert mid-build.
- Have you ever told a manufacturer not to do this? The best firms walk away from projects that will not produce meaningful results. If they have never turned down an engagement, they take every project regardless of fit, and that means some of their projects fail.
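A concrete answer to the "how does the system connect to my ERP" question names a specific mechanism. As a sketch of the scheduled-export option, here is what a nightly pull of closed job records into a staging file might look like. This is illustrative only: `sqlite3` stands in for the ERP database, and the `jobs` table and its columns are hypothetical, not any real ERP's schema.

```python
# Minimal sketch of a scheduled ERP export: a nightly job pulls closed
# job records into a flat file the AI tool ingests. sqlite3 stands in
# for the ERP database; table and column names here are hypothetical.
import csv
import sqlite3

def export_closed_jobs(conn: sqlite3.Connection, out_path: str) -> int:
    """Write closed job records to a CSV staging file; return row count."""
    rows = conn.execute(
        "SELECT job_id, part_number, quoted_price, actual_cost "
        "FROM jobs WHERE status = 'closed'"
    ).fetchall()
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["job_id", "part_number", "quoted_price", "actual_cost"])
        writer.writerows(rows)
    return len(rows)

# Stand-in database: two closed jobs and one still open.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE jobs (job_id TEXT, part_number TEXT, quoted_price REAL, "
    "actual_cost REAL, status TEXT)"
)
conn.executemany(
    "INSERT INTO jobs VALUES (?, ?, ?, ?, ?)",
    [("J-1001", "PN-77", 4200.0, 3900.0, "closed"),
     ("J-1002", "PN-12", 1850.0, 2100.0, "closed"),
     ("J-1003", "PN-77", 4300.0, 0.0, "open")],
)
exported = export_closed_jobs(conn, "closed_jobs.csv")
print(f"exported {exported} closed jobs")
```

The same shape applies whether the data moves over an API or a read-only database connection. The point is that a credible answer names a mechanism and your data, not "our platform."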
When You Need Advisory vs. When You Need a Build
Advisory and build engagements solve different problems. Choosing the wrong format costs time and money even when the partner is competent.
Advisory Makes Sense When:
- You have not identified the specific problem AI should solve. The shop floor has multiple bottlenecks and you need help prioritizing which one to address first.
- Leadership is not aligned on whether to invest. An advisory engagement produces the data and the business case that gets the CFO to approve the build budget.
- You need to evaluate your data readiness before committing to a build. A two-week assessment phase can reveal whether your ERP data is sufficient to train a useful model or whether you need six months of data cleanup first.
- You are comparing multiple potential AI applications and need a framework to decide which one to build first.
A good advisory engagement takes 2 to 4 weeks, costs $15,000 to $40,000, and produces a specific recommendation with defined scope, timeline, data requirements, and expected outcomes for the build that follows. If the advisory phase runs longer than six weeks or costs more than the initial build phase, the ratio is wrong.
Build Makes Sense When:
- You know the problem. The quoting process takes too long, costing you bids every month. Your senior estimator retires next year and nobody carries what they know. On-time delivery has dropped three quarters in a row and the floor has no visibility into what is slipping.
- You have ERP data covering at least two years of job history.
- You have an internal champion who will use the tool daily and provide feedback during the build.
- The budget is approved.
When these conditions are met, advisory delays the outcome without adding meaningful clarity. Build directly.
What a Good Engagement Actually Looks Like
The shape of a well-run manufacturing AI project follows a specific pattern. The timeline below reflects what happens when the partner knows the domain and the manufacturer has the data.
Week 1: Floor time. The partner's engineers spend two to three days on-site. They watch the estimator quote a real RFQ. They sit with the scheduler during the morning meeting. They export a sample dataset from the ERP and assess what is there. They document the workflow at the spreadsheet and email level, not at the process-diagram level. The output is a scope document that describes what they will build, how it connects to your data, and what the first version will do.
Weeks 2-3: Data preparation. Historical job records are exported, cleaned, and structured. The partner identifies gaps in the data and works with your team to fill the ones that affect matching accuracy. Material pricing data is organized. Customer history is assembled. This work is unglamorous and essential. Your ERP data is more valuable than most shops realize.
Weeks 3-6: Build. Working software appears in stages. The first functional version usually arrives by week 4. The estimator tests it on real RFQs alongside the existing process. Feedback from actual use drives daily adjustments. The interface evolves based on how the team actually works, not how a requirements document said they would work.
Weeks 6-8: Testing and refinement. The system runs on live work. Quotes generated with the AI tool are compared against quotes generated manually. The numbers either confirm the system adds value or they reveal where the model needs adjustment. This phase produces the baseline metrics that future performance gets measured against.
Weeks 8-10: Deployment. The tool becomes the primary system. Team training takes place. The feedback loop is established: every estimator override, every correction, every edge case sharpens the model. The partner provides ongoing support as usage patterns stabilize and new questions emerge.
Total timeline: 8 to 10 weeks from kickoff to deployed software. This is the timeline for a focused, single-application build like AI-assisted quoting or knowledge management. Multi-application deployments or complex multi-site operations run 12 to 16 weeks.
How Bloomfield's Approach Differs
We build custom AI software for manufacturers. That is the entire business. We do not sell a platform. We do not license a product. We do not produce strategy documents. We build tools that connect to your ERP data, run inside your operation, and get used every day by the people doing the work.
Three things separate how we work from how most firms in this space operate.
We start on the floor. Every engagement begins with our engineers spending time in your facility, watching how your team actually works. We read the quoting spreadsheets. We see the whiteboard with the job schedule. We ask the estimator which parts of the process make them want to throw their keyboard through the wall. The software we build reflects what we see on your floor, not what a sales process assumed about your floor.
You own everything we build. The code, the models, the data pipelines. We build it, deploy it, and hand it over. There is no licensing fee for your own software. There is no platform subscription. If you decide to bring maintenance in-house after a year, the codebase is yours and it is documented. We believe manufacturers should own their tools the same way they own their machines.
We measure what you measure. Quote turnaround time. Win rate. Margin accuracy. On-time delivery. If those numbers do not move, the project has not succeeded regardless of how polished the software looks. We define the success metrics during the first week and track them through deployment. Our guide to AI for manufacturers covers the broader framework for identifying where AI creates the most value.
Understanding the Cost Structure
Transparency on cost eliminates the most common source of friction in AI engagements. Here is how the numbers typically break down for a manufacturing-focused custom build.
Discovery and data assessment: $10,000 to $25,000. On-site time, data export and evaluation, workflow mapping, scope definition. Some firms include this in the build cost. We separate it because the discovery sometimes reveals that the project should be scoped differently than initially proposed, and the manufacturer should have a decision point before committing to the full build.
Build and deployment: $65,000 to $200,000. The range reflects complexity. A focused quoting tool for a single-site operation running one ERP sits at the lower end. A multi-application system spanning quoting, production visibility, and knowledge management across two facilities with different ERP systems sits at the upper end.
Ongoing support: $1,500 to $5,000 per month. Hosting, monitoring, model updates as new data flows through the system, and support for the team using the tool. Most manufacturers maintain ongoing support for the first 12 to 18 months and then evaluate whether to bring maintenance in-house or continue with external support.
The total first-year investment for a typical single-application build ranges from $95,000 to $155,000 including discovery, build, and 12 months of support. That number should be evaluated against the measurable operational improvement the tool produces. For shops where quoting speed correlates with win rate (and it always does), a tool that moves the needle by 4 to 8 percentage points on win rate pays for itself within the first two quarters.
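That payback claim is straightforward arithmetic to check against your own numbers. A sketch with purely illustrative inputs; every figure below is an assumption to be replaced with your shop's actuals:

```python
# Back-of-envelope payback math for the win-rate claim above. Every
# input here is an illustrative assumption, not a benchmark.
quotes_per_year = 1200       # RFQs quoted annually (assumption)
avg_job_value = 12_000       # average awarded job value, USD (assumption)
gross_margin = 0.35          # gross margin on awarded work (assumption)
win_rate_lift = 0.06         # within the 4-8 point range above

added_wins = quotes_per_year * win_rate_lift          # extra jobs won per year
added_margin = added_wins * avg_job_value * gross_margin
first_year_cost = 125_000    # midpoint of the $95k-$155k range above

payback_months = first_year_cost / (added_margin / 12)
print(f"added margin: ${added_margin:,.0f}/yr, payback: {payback_months:.1f} months")
```

With these inputs the tool recovers its first-year cost in roughly five months; halve the assumed lift or job value and the payback stretches accordingly, which is exactly why the baseline metrics from the testing phase matter.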
Frequently Asked Questions
How long does a typical engagement last?
8 to 10 weeks from kickoff to deployed software for a single-application build. Discovery adds 1 to 2 weeks at the front. Multi-application or multi-site deployments run 12 to 16 weeks. The variable is data readiness. A shop with clean ERP data covering 5 years of job history moves faster than one that needs significant data preparation before the build can begin.
Do we need to hire a data scientist?
No. The consulting partner handles the AI and machine learning work. What you need internally is a project champion: someone who knows the current process deeply, will test the tool during the build, and will drive adoption after deployment. That person is usually your lead estimator, your operations manager, or whoever runs the daily scheduling meeting.
Can we start with a pilot before committing to a full build?
Yes, and a credible partner will encourage it. A proof of concept using your actual data on a defined subset of the problem takes 3 to 4 weeks and demonstrates whether the approach works before you commit the full budget. The POC produces real results on real RFQs. You see the system surface comparable past jobs, suggest pricing ranges, and flag risks using your own historical records. If the POC does not convince your team, the engagement stops. The cheapest AI project is the one that proves the concept wrong early.
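The "surface comparable past jobs" step in a POC can be demonstrated with something as simple as weighted nearest-neighbor matching over a few job attributes. A sketch with hypothetical features and weights; a production system would tune both on your historical data:

```python
# A sketch of comparable-job matching for a quoting POC: rank historical
# jobs by similarity to an incoming RFQ. The features (material, qty,
# op count) and the weights are illustrative assumptions.
history = [
    {"job": "J-1001", "material": "6061-t6", "qty": 250, "ops": 4, "price": 4200},
    {"job": "J-1002", "material": "304-ss",  "qty": 50,  "ops": 7, "price": 6100},
    {"job": "J-1003", "material": "6061-t6", "qty": 300, "ops": 5, "price": 4800},
]

def similarity(rfq: dict, job: dict) -> float:
    """Higher is more similar. Weights are illustrative, not tuned."""
    score = 3.0 if rfq["material"] == job["material"] else 0.0
    score += 2.0 / (1 + abs(rfq["qty"] - job["qty"]) / max(rfq["qty"], 1))
    score += 1.0 / (1 + abs(rfq["ops"] - job["ops"]))
    return score

rfq = {"material": "6061-t6", "qty": 275, "ops": 5}
matches = sorted(history, key=lambda j: similarity(rfq, j), reverse=True)
print([m["job"] for m in matches[:2]])  # most comparable past jobs first
```

The pricing ranges the POC suggests come from the prices attached to those nearest matches, which is why two years of job history is the practical floor for a useful build.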
What if our data is a mess?
Every manufacturer's data is, to some degree. The question is whether the mess prevents the system from functioning or whether the system can work with what exists and improve as new, cleaner data flows through. A good discovery phase answers this specifically: here is what we have, here is what we need, here is how long the cleanup takes, and here is what the system can do in the interim. Most shops are more data-ready than they think.
Should we wait until our ERP implementation is finished?
If you are mid-migration between ERP systems, yes. Wait until the new system is live and has 6 to 12 months of job data. If your current ERP is stable and contains years of historical data, there is no reason to wait. The AI tool connects to whatever ERP you run. If you migrate later, the data layer gets re-pointed to the new system. The intelligence built on top of the data transfers.
Talk to a Team That Builds for Manufacturing
We will walk through your current process, assess your data, and tell you exactly what a build would look like. No slide deck. No 90-day discovery phase.
Book a Call →