Cobot Vendor Red Flags: Questions That Filter Out the Marketing
Most cobot vendors are technically competent. The ones that will waste your time are identifiable before the contract is signed.

The collaborative robotics market has matured to a point where most hardware from the major vendors is technically sound. Universal Robots, FANUC CRX, ABB GoFa, Doosan, Techman, AUBO — these are not engineering experiments. The robots work. The question is whether the application is a fit, whether the vendor's ecosystem matches your integration requirements, and whether the support model can sustain a production deployment.
The gap between a good cobot deployment and an expensive mistake is almost entirely determined by the vendor evaluation process. The right questions, asked before the contract is signed, will surface the issues that cost you time and money after delivery.
This article gives you those questions.
Section 1: Application validation
Red flag question 1: "Show me cycle-time data for this application, from a customer with this payload and path profile."
The red flag: the vendor demonstrates cycle time on their demo floor with their demo parts under their conditions. That's not your application. A UR5e running a 3 kg aluminum casting across a 600 mm transfer path at their facility tells you nothing reliable about what cycle time you'll achieve with your 4.5 kg steel workpiece, your fixture tolerances, and your cell layout.
What a trustworthy answer sounds like: "We have a reference customer in similar discrete manufacturing who's running a 4 kg part on a 580 mm transfer path at 1,400 mm/s programmed speed. Their actual cycle at the TCP in that move is 0.55 seconds. I can share the application report and connect you with their automation engineer."
What a red flag sounds like: "Our UR5e reaches 1,750 mm/s maximum TCP speed." That's the mechanical maximum, not the application-achievable cycle time under PFL constraints with your payload.
Ask for application-specific cycle time data from a reference deployment, in writing, before proceeding.
Red flag question 2: "What happens to my cycle time if I need to add a force/torque sensor or a vision system?"
Any production assembly or inspection application will eventually need force feedback, vision, or both. Adding an F/T sensor (Robotiq FT 300-S, ATI Mini45) adds 400–600 grams to the end effector, which reduces the effective payload budget and may force a lower safety speed limit. A wrist-mounted camera adds further weight and introduces an image-processing step that typically costs 100–300 ms per frame.
If the vendor's cycle-time quote is based on a bare-arm configuration, and your application will require F/T sensing or vision, get the recalculated cycle time with those payloads before you buy.
A vendor who doesn't ask about your EOAT configuration before quoting cycle time is giving you a number that will change after delivery.
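The payload arithmetic behind this question is simple enough to sanity-check yourself before the vendor re-quotes. A minimal sketch, using illustrative masses (the rated payload and component weights are placeholder values; substitute the figures from your actual datasheets):

```python
# Illustrative EOAT payload-budget check. All masses are example
# values, not vendor specifications; substitute datasheet figures.

RATED_PAYLOAD_KG = 5.0          # e.g. a 5 kg-class arm

eoat_stack_kg = {
    "gripper": 1.0,             # two-finger gripper (example mass)
    "ft_sensor": 0.5,           # F/T sensor, in the 400-600 g range
    "camera": 0.3,              # wrist-mounted camera and bracket
}
workpiece_kg = 3.0

total_kg = workpiece_kg + sum(eoat_stack_kg.values())
margin_kg = RATED_PAYLOAD_KG - total_kg

print(f"total moving mass: {total_kg:.1f} kg, margin: {margin_kg:.1f} kg")
if margin_kg < 0:
    print("over rated payload: respecify the arm or the EOAT stack")
elif margin_kg < 0.5:
    print("thin margin: expect derated speed and acceleration")
```

Run the same check for every EOAT configuration you expect to deploy over the cell's life, not just the launch configuration; the margin that disappears when the F/T sensor is added is exactly the cycle-time change the vendor's bare-arm quote hides.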
Section 2: Reference sites and real-world performance
Red flag question 3: "Can I speak directly with the operations manager at a reference site — not your sales contact or account manager?"
Vendor-curated reference calls are carefully managed. You speak with the person who had the best experience, at the site with the cleanest deployment, often with the vendor account manager on the call. The questions that would give you useful information — "what went wrong in the first three months," "how many engineering hours did commissioning actually take," "would you buy this vendor again if you knew what you know now" — don't get asked in that format.
Ask for two reference sites with similar applications and request direct contact information for the operations or automation manager. Then call without the vendor on the line. Ask these four questions specifically:
- What was the biggest implementation problem you ran into?
- How many unplanned downtime hours in the first 90 days?
- What would you do differently?
- Is the vendor's field support team responsive and technically competent?
If the vendor cannot provide a direct reference contact for a comparable deployment, they don't have a track record in your application.
Red flag question 4: "How long has the longest continuously running deployment in this application been operational?"
This filters for longevity, not just novelty. A vendor may have 50 cobots deployed — if all of them are under 18 months old, you don't yet know their long-term failure rates. An arm that's been running a stamping machine tending application for 36 continuous months has proven something. One that's been running for 9 months hasn't.
For production-critical applications, ask for reference sites with 24+ months of continuous operation. If none exist, that's a data point about the vendor's tenure in your specific application, not their overall capability.
Section 3: Software, ecosystem, and lock-in
Red flag question 5: "What is your software update policy, and have any updates ever required program re-teaching?"
Cobot operating systems are updated regularly. UR's PolyScope has released several major versions since the e-Series launch. FANUC CRX's iHMI has had updates that changed path behavior. A software update that recalibrates joint zero positions, changes safe-stop behavior, or modifies the PFL force threshold can invalidate a tested and running production program.
Ask specifically: "In the last three years of software updates, have any required customers to re-teach programs or recalibrate safety configurations?" If yes — what is the vendor's process for customer notification, and do they support rollback to previous versions during a production program?
Vendors who can't answer this question specifically have not thought through the production impact of their software lifecycle.
Red flag question 6: "What happens to my data and programs if I switch vendors?"
This is the software lock-in question. UR programs are stored in PolyScope's proprietary format (.urp). FANUC CRX programs are in FANUC's TP format. ABB programs are in RAPID. None of these are interoperable. If you build 20 cells on one vendor's platform and then switch vendors, your programs must be rewritten from scratch.
For a company evaluating a multi-cell deployment, this is a strategic decision, not a procurement detail. Ask the vendor: "If I deploy 5 cells with you now and want to standardize on a different platform in three years, what is the migration path for my programs?" A vendor who presents this as a non-issue is either naive or not being direct with you.
Red flag question 7: "What EOAT is certified or validated for your robot, and can I use third-party grippers without voiding support?"
The EOAT ecosystem varies dramatically by vendor. Universal Robots has UR+ with over 300 certified peripherals — force/torque sensors, grippers, vision systems, conveyors, dispensers. Certification means the peripheral has been tested for compatibility and the vendor supports the integration.
FANUC CRX's ecosystem is smaller. Doosan has a growing partner network. ABB's GoFa integrates cleanly with ABB's own accessories but has fewer third-party certifications.
Ask: "If I use a Robotiq 2F-140 gripper with your robot, what is your support policy for integration issues? Will you support the gripper integration or only the robot itself?" Some vendors disclaim any support for non-certified peripherals — which is their right, but you need to know before you specify your EOAT.
Section 4: Safety and compliance support
Red flag question 8: "Will you provide the risk assessment for this deployment, or is that my integrator's responsibility?"
Risk assessment responsibility varies by vendor and by deployment model. Vendor direct deployments sometimes include application engineering that covers risk assessment. Deployments through an integrator typically place risk assessment responsibility on the integrator. Deployments where the buyer programs the robot themselves often leave the risk assessment to the buyer.
This matters for ISO 10218-2 compliance. If no one in the deployment chain has taken explicit responsibility for the risk assessment, it won't get done — or it will get done after the fact when an auditor asks for it.
Ask explicitly who owns the risk assessment, and get the answer in writing in the commercial terms.
Red flag question 9: "What is the IP rating for the robot, and have you deployed this model in a wet or contaminated environment? How did it perform?"
The IP rating question surfaces environmental compatibility issues before hardware is purchased. UR3e/5e/10e/20 are IP54 — dust-protected, splash-resistant, not rated for coolant flood or washdown. FANUC CRX-10iA and ABB GoFa are IP67 — suitable for directed fluid jets. Neither is IP69K (high-pressure washdown).
If your environment has coolant, process water, chemical mist, or cleaning cycles, the IP rating must be verified against the actual conditions — not just the nominal rating. IP54 in a machine tending cell with coolant mist is a mismatch that will manifest as joint seal failures at 18–24 months.
Section 5: Support and TCO
Red flag question 10: "Where is your nearest field support technician, and what is your on-site response SLA?"
Remote support resolves most software and configuration issues. Hardware failures — joint module replacement, controller failure, cable harness damage — require on-site technicians. If your nearest vendor field support is 300 miles away, a production-stopping hardware failure means 1–2 days of downtime while the technician travels.
Ask for the specific response SLA in the commercial terms: "What is your committed on-site response time for a production-stopping fault? Is it covered by the warranty, or does it require a support contract?" Get the contract terms for Year 2 and Year 3, not just the warranty period.
Red flag question 11: "What is the expected joint module lifespan at our duty cycle, and what does a joint module replacement cost?"
Cobot joints are not infinitely serviceable. Joint modules — the integrated motor/gearbox/sensor assemblies — have finite lifespans at rated duty. At 24/7 operation at rated payload, joint module replacement intervals are typically quoted at 20,000–40,000 hours per joint, but actual intervals depend heavily on application loading, orientation, and cycle.
A joint module replacement costs $3,000–$8,000 per joint depending on the model and the joint position (wrist joints are cheaper than shoulder joints). A UR10e has six joints. At a 20,000-hour module life and 6,000 operating hours per year, you're looking at first joint replacements in years 3–4. If you haven't budgeted for this in your 5-year TCO model, the model is wrong.
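The replacement arithmetic is worth putting into your TCO model explicitly. A minimal sketch, using the ranges quoted above as planning assumptions (your vendor's actual module life and pricing replace these placeholders):

```python
# Rough 5-year joint-replacement budget using the ranges quoted in
# the text. All figures are planning assumptions, not vendor quotes.

JOINTS = 6                           # a 6-axis arm such as a UR10e
HOURS_PER_YEAR = 6_000
MODULE_LIFE_HOURS = 20_000           # conservative end of 20k-40k range
COST_PER_JOINT_USD = (3_000, 8_000)  # wrist joints vs. shoulder joints

years_to_first = MODULE_LIFE_HOURS / HOURS_PER_YEAR
# Each joint reaches its life limit this many times within 5 years.
replacements_in_5y = JOINTS * (5 * HOURS_PER_YEAR // MODULE_LIFE_HOURS)

low = replacements_in_5y * COST_PER_JOINT_USD[0]
high = replacements_in_5y * COST_PER_JOINT_USD[1]

print(f"first replacements due around year {years_to_first:.1f}")
print(f"5-year joint budget: ${low:,} to ${high:,}")
```

At these assumed numbers the first replacements land around year 3.3 and the 5-year exposure is $18,000–$48,000 for a single arm. Multiply by cell count before signing, and ask the vendor which joints fail first in your orientation and loading.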
Red flag question 12: "Has the software that runs this robot been updated in a way that required changes to existing production programs in the last 18 months? How much notice did you give customers?"
This is the operational maturity question. A vendor with a production-grade software lifecycle gives customers 90+ days' notice of breaking changes, provides migration tools, and maintains the previous version within a support window. A vendor that releases updates on a consumer-product cadence, with short notice and limited backward compatibility, will cause you production disruption.
The answer tells you whether this vendor treats their robot as a production tool or a consumer device.
The filtering scorecard
Run each vendor through these questions. Grade each answer:
- Pass: specific, evidenced, and able to be written into the contract
- Defer: "we'll get back to you" — acceptable once, red flag twice
- Fail: vague, deflected, or not answered
A vendor who passes 10–12 of these questions has a mature commercial program. A vendor who passes 6–8 has gaps in either experience or process. A vendor who passes fewer than 6 is not ready for a production-critical deployment in your application.
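The scorecard above can be tallied mechanically. A minimal sketch of the grading bands as stated (the article leaves a score of 9 between bands; this sketch assigns it to the middle band as an assumption):

```python
# Minimal vendor scorecard tally for the 12 red-flag questions.
# Grades per question: "pass", "defer", "fail". Band thresholds
# follow the text; a score of 9 (unaddressed there) is treated as
# the middle band -- an interpretive assumption.

def score_vendor(grades):
    """Return a readiness verdict from 12 per-question grades."""
    assert len(grades) == 12, "one grade per red-flag question"
    passes = grades.count("pass")
    if passes >= 10:
        return "mature commercial program"
    if passes >= 6:
        return "gaps in experience or process"
    return "not ready for production-critical deployment"

# Example: 9 passes, 2 defers, 1 fail
grades = ["pass"] * 9 + ["defer", "defer", "fail"]
print(score_vendor(grades))   # -> gaps in experience or process
```

Remember the defer rule from the list above when grading: a second "we'll get back to you" on the same question should be recorded as a fail, not a defer.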
The cobot market has enough technically competent vendors that you don't need to accept a vendor with a weak commercial program. The questions above filter for commercial readiness, not just engineering capability — and in manufacturing, both matter.
For the full 90-day pilot structure that validates vendor claims with real production data, see the previous article in this series.


