The 9-Question RFQ That Filters Out Vendors Who Can't Deliver
Most RFQs collect prices. These nine questions collect proof of operational capability.

A manufacturing plant in the American Midwest issued an RFQ for six collaborative robots in 2023. Seventeen vendors responded. Fifteen quoted within 15% of each other on unit price; two of those fifteen were eliminated for missing specifications. From the remaining thirteen, the procurement team selected the one with the lowest quote and fastest promised delivery.
Nine months into deployment, three of the six units were sitting in the vendor's service backlog. The vendor's nearest field technician was a four-hour drive away. The integration with the facility's SCADA system had never been completed — the vendor's scope of work had implicitly excluded it. Annual maintenance costs were running at 22% of the purchase price, double what the vendor had implied in the sales process.
This story is a composite, but every element in it is a documented failure pattern in capital equipment procurement. The RFQ collected prices. It did not collect proof that the vendor could operate in this facility.
The nine questions below are designed to do the second thing.
Why Standard RFQs Fail for Robotics
A commodity RFQ — for fasteners, for office furniture, for fleet vehicles — can reasonably focus on price, lead time, and specification compliance. The product is well-understood. Failure modes are predictable. Support infrastructure is broad.
Robotics and automation equipment is a different category. The product is complex, the installation is bespoke, the integration surface is large, and the ongoing relationship with the vendor is effectively mandatory for the life of the equipment. You are not buying a unit; you are entering a 5–10 year operational relationship.
A procurement process that treats robotics like a commodity will select on price and then spend years paying for it in support costs, integration debt, and underperforming equipment.
The nine questions below force vendors to describe how they actually operate — not what the system can theoretically do under ideal conditions.
The 9 Questions — With Scoring Rubric
Issue these in writing and require written responses; a vague answer to a specific question is itself a data point.
Question 1: Describe your installation process at a site comparable to ours. What infrastructure did the customer need to provide, and what did you provide?
What you're testing: Whether the vendor has actually deployed at a site with similar floor plan, electrical supply, network infrastructure, and workflow complexity. Vendors who have only deployed at greenfield sites or at facilities with dedicated IT teams often underestimate what a brownfield installation requires.
Strong answer: A specific named (or described) comparable installation, a list of customer-provided prerequisites (electrical specs, network segments, floor prep), a list of what the vendor scoped in, and a candid note about what was harder than expected.
Weak answer: A generic description of the installation process with no site specifics. The phrase "we work closely with the customer's team to ensure a smooth installation" with no operational detail.
| Score | Description |
|---|---|
| 3 | Specific comparable site referenced. Clear demarcation of customer vs. vendor scope. Acknowledges at least one installation challenge. |
| 2 | Comparable site referenced but vague. Scope described at category level only. |
| 1 | Generic process description. No comparable site. No acknowledgment of customer prerequisites. |
| 0 | "We handle everything" with no specifics. Refuses to describe scope boundaries. |
Question 2: What is your contractual uptime guarantee, and exactly how is uptime measured and reported?
What you're testing: Whether the vendor has a real uptime commitment or a marketing claim. Uptime definitions vary enormously. "99% uptime" measured against scheduled operating hours is meaningless if scheduled hours exclude maintenance windows that the vendor controls. The worked example after the rubric below shows how much the measurement definition alone moves the headline number.
Strong answer: A specific percentage (e.g., 95% of scheduled production hours), a precise definition of how hours are counted, what counts as downtime versus degraded performance, and a reporting mechanism (dashboard, monthly report) so you can verify the number independently.
Weak answer: "We target 98% uptime" with no definition of how that's measured or reported. The phrase "best effort" appearing anywhere in the uptime language.
| Score | Description |
|---|---|
| 3 | Specific percentage with clear definition. Excludes only pre-agreed maintenance windows. Reporting mechanism named. |
| 2 | Percentage stated but measurement definition vague. Reporting exists but informal. |
| 1 | "Target" language without commitment. No reporting mechanism. |
| 0 | "Best effort" or no uptime commitment whatsoever. |
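To see how much the definition matters, here is a minimal sketch scoring the same hypothetical month under two common uptime definitions. Every hour figure below is an assumption for illustration only.

```python
# Hypothetical month of operation, scored under two common uptime definitions.
scheduled_hours = 480          # assumed: two shifts across 20 production days
vendor_maintenance_hours = 24  # maintenance windows the vendor controls
unplanned_downtime_hours = 19  # failures during scheduled production

# Definition A: everything that is not production counts against uptime.
uptime_a = (scheduled_hours - vendor_maintenance_hours - unplanned_downtime_hours) / scheduled_hours

# Definition B: vendor-controlled maintenance is removed from the denominator first.
operating_hours = scheduled_hours - vendor_maintenance_hours
uptime_b = (operating_hours - unplanned_downtime_hours) / operating_hours

print(f"Definition A (against all scheduled hours): {uptime_a:.1%}")  # 91.0%
print(f"Definition B (maintenance excluded):        {uptime_b:.1%}")  # 95.8%
```

Same equipment, same month, a gap of almost five points. The written answer should tell you unambiguously which definition the contract uses.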
Question 3: Where is your nearest field service technician relative to our facility? What is your contractual on-site response time for a critical failure?
What you're testing: Support geography. A vendor with regional distribution of certified field technicians is operationally different from one whose service team is centralized at headquarters. The difference shows up every time something breaks on a Friday afternoon.
Strong answer: A specific city, a specific contractual response time (e.g., 4 hours for critical failures, 24 hours for non-critical), and clarity on whether that response time is for a phone call or for a technician physically on site.
Weak answer: "We have a network of certified service partners" without naming them or specifying response times. "We can usually get someone out within a few days."
| Score | Description |
|---|---|
| 3 | Technician location named. Contractual on-site response time stated for critical vs. non-critical. Phone vs. on-site distinction made. |
| 2 | Region named. Response time stated but not differentiated by severity. |
| 1 | "Network of partners" without specifics. Response time not contractual. |
| 0 | No service geography information. Response time undefined. |
Question 4: Describe the integration points this system requires with our existing infrastructure. Which integrations are within your standard scope and which are out of scope?
What you're testing: Integration realism. Most robotics procurement failures involve integration scope that was implied to be included but contractually excluded. The vendor's standard scope often covers the robot's native software. Integration with your ERP, WMS, SCADA, elevator controls, or fire suppression system may require a separate statement of work — and a separate contractor.
Strong answer: A specific list of integration types the vendor has delivered (ERP brands, WMS platforms, elevator controller manufacturers, safety system APIs), a clear statement of what is in-scope versus out-of-scope in the standard quote, and a reference to at least one comparable integration they have completed.
Weak answer: "We have an open API" with no implementation specifics. "Integration is handled by our software team" without scope definition.
| Score | Description |
|---|---|
| 3 | Specific integration types listed. In-scope vs. out-of-scope clearly delineated. Comparable integration reference available. |
| 2 | Integration categories acknowledged. Scope boundary exists but not clearly articulated. |
| 1 | "Open API" response with no implementation specifics. No scope boundary stated. |
| 0 | Integration not addressed. "Your IT team handles that." |
Question 5: What are all the cost components not included in this quote? List every line item that could add to the total cost of ownership in years 1, 2, and 3.
What you're testing: Pricing transparency and TCO honesty. Robotics and automation vendors routinely underprice the headline unit to win the deal, then recover margin on integration labor, software licenses, annual support contracts, consumables, and out-of-warranty parts. The worked tally after the rubric below shows how quickly those line items compound.
Strong answer: An itemized list of potential additional costs — installation travel expenses, commissioning labor beyond a stated number of days, annual software subscription, maintenance contract options and their pricing, spare parts the customer should hold on-site, consumable replacement schedules.
Weak answer: "The quote is all-inclusive" with no itemization. Any quote that does not separately state the year-2 and year-3 support costs.
| Score | Description |
|---|---|
| 3 | Itemized list provided. Year 1/2/3 cost profile included. Maintenance contract costs stated. Consumables schedule included. |
| 2 | Most categories addressed. Some line items missing or grouped. |
| 1 | Vague "some additional costs may apply" with no list. |
| 0 | Claims the quote is all-inclusive with no itemization. |
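To pressure-test the answer, tally the vendor's numbers into a three-year view. A minimal sketch follows; every figure is a hypothetical placeholder to be replaced with the line items from the written response.

```python
# Three-year TCO tally per unit. All figures are hypothetical placeholders.
costs = {
    "unit price":                   {"y1": 85_000, "y2": 0,     "y3": 0},
    "installation & commissioning": {"y1": 12_000, "y2": 0,     "y3": 0},
    "integration labor":            {"y1": 18_000, "y2": 0,     "y3": 0},
    "software subscription":        {"y1": 6_000,  "y2": 6_000, "y3": 6_000},
    "maintenance contract":         {"y1": 0,      "y2": 9_000, "y3": 9_000},  # year 1 under warranty
    "consumables & spares":         {"y1": 2_500,  "y2": 2_500, "y3": 2_500},
}

for year in ("y1", "y2", "y3"):
    print(f"{year}: ${sum(item[year] for item in costs.values()):,}")

tco = sum(sum(item.values()) for item in costs.values())
print(f"3-year TCO: ${tco:,} ({tco / costs['unit price']['y1']:.1f}x the headline unit price)")
```

Even with modest placeholder numbers, the three-year total lands near twice the headline unit price. That is exactly the profile an "all-inclusive" answer tends to conceal.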
Question 6: Who owns the operational data generated by this system, and can we export it in a standard format at any time?
What you're testing: Data ownership and exit optionality. Many robotics vendors lock operational data inside proprietary fleet management platforms. If you cannot export your own production records, cycle times, downtime logs, and maintenance history, you have no analytical independence and no leverage at contract renewal. The sketch after the rubric below is a simple independence test to run against a sample export.
Strong answer: A clear statement that the operator owns the operational data, a description of the export formats available (CSV, JSON, API), and confirmation that data is exportable on demand without vendor approval.
Weak answer: "Your data is securely stored in our platform" without addressing ownership or export. Any mention of data being used for "product improvement" without an opt-out.
| Score | Description |
|---|---|
| 3 | Operator owns data, stated explicitly. Export formats named. Export available on demand. |
| 2 | Data ownership implied but not explicit. Export possible with vendor assistance. |
| 1 | Data stored on vendor platform; ownership and export not addressed. |
| 0 | Vendor claims rights to operational data. No export mechanism. |
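One practical test during evaluation: request a sample export and confirm you can parse it with no vendor tooling at all. A minimal sketch follows, assuming a CSV downtime log; the filename and column names are hypothetical stand-ins for whatever the vendor's sample actually contains.

```python
import csv
from datetime import datetime

# Parse a vendor-supplied sample export using only the standard library.
# "downtime_log_sample.csv", "start_time", and "end_time" are hypothetical
# names; map them to the columns in the actual sample.
with open("downtime_log_sample.csv", newline="") as f:
    events = list(csv.DictReader(f))

total_minutes = sum(
    (datetime.fromisoformat(e["end_time"]) - datetime.fromisoformat(e["start_time"])).total_seconds() / 60
    for e in events
)
print(f"{len(events)} downtime events, {total_minutes:.0f} minutes total")
```

If a script like this cannot be written against the sample because the export requires vendor credentials, a proprietary reader, or a support ticket, the data is not exportable in any sense that matters at contract renewal.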
Question 7: Describe your software update process. How much advance notice do you provide before an update that changes robot behavior, and what is the opt-out mechanism?
What you're testing: Change management discipline. Software-driven capital equipment can be materially changed by the vendor at any time. An update that adjusts speed parameters, changes obstacle avoidance behavior, or modifies sensor thresholds can break a carefully tuned production process overnight.
Strong answer: A specific advance notice period (e.g., 10 business days) for updates affecting operational behavior, a description of release notes and testing protocol, and an explicit opt-out or delay mechanism so you can control when updates apply in your environment.
Weak answer: "We push updates automatically to keep your system current." No advance notice commitment. No opt-out mechanism.
| Score | Description |
|---|---|
| 3 | Advance notice period stated contractually. Release notes included. Opt-out or delay mechanism exists. |
| 2 | Notice provided but informal. No contractual commitment on timing. |
| 1 | "We notify customers" with no specifics on timing or format. |
| 0 | Automatic updates, no notice, no opt-out. |
Question 8: Provide two references: one from a comparable operational site that has been running for 90+ days without vendor on-site support, and one from a customer who chose not to renew their contract.
What you're testing: Reference quality and vendor candor. A vendor-curated reference list shows only the happiest customers. The reference who did not renew tells you what sustained deployment actually looks like once the enthusiasm of the sales process has worn off.
Strong answer: Two references meeting the criteria. The vendor may push back on the non-renew reference — that pushback is informative. A vendor who has no non-renewals is either very new or is not giving you a complete picture.
Weak answer: Three glowing references, all from the vendor's key accounts, none operational for more than six months.
| Score | Description |
|---|---|
| 3 | Both reference types provided without significant resistance. At least one reference from a comparable facility type. |
| 2 | 90-day operational reference provided. Non-renew reference declined but vendor acknowledges the pattern. |
| 1 | Only vendor-selected success references. No operational longevity information. |
| 0 | Refuses to provide references. Redirects to case studies only. |
Question 9: What is your process if this system does not meet the agreed performance targets during the first 90 days? What is the customer's exit path?
What you're testing: Commercial risk allocation. A vendor who has no clear underperformance remedy and no exit path is asking you to absorb all the deployment risk. A vendor with a clear 90-day performance guarantee, specific remedy process, and defined exit terms is distributing that risk appropriately.
Strong answer: A specific performance guarantee for the initial deployment period, a defined remedy process (e.g., the vendor returns on-site at no charge, the customer receives a service credit, the contract can be terminated without penalty if targets are not met within an extended cure period), and a stated definition of what "performance" means in measurable terms.
Weak answer: "We stand behind our products" with no specific remedy. "We've never had a customer who wasn't satisfied" (which is impossible to verify and not a remedy).
| Score | Description |
|---|---|
| 3 | 90-day performance guarantee stated. Specific remedy process defined. Exit path without penalty described. |
| 2 | Remedy process exists but informal. Exit requires negotiation. |
| 1 | "We'll work to resolve any issues" with no specific mechanism. |
| 0 | No performance guarantee. No exit path. All risk on customer. |
Scoring and Decision Rules
Sum the scores across all nine questions. Maximum is 27.
| Total Score | Decision |
|---|---|
| 22–27 | Proceed to contract negotiation. Vendor demonstrates operational maturity. |
| 16–21 | Proceed with caution. Identify the low-scoring areas and negotiate remedies before signing. |
| 10–15 | High risk. Do not proceed unless the specific gaps can be contractually closed. |
| Below 10 | Decline. The vendor is not operationally ready for your deployment. |
A vendor who scores 0 on Questions 3 or 8 should be declined regardless of total score. Support geography and reference quality are not negotiable.
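The rules above are mechanical enough to encode directly, which keeps the hard-fail rule on Questions 3 and 8 from getting lost in a spreadsheet. A minimal sketch:

```python
def rfq_decision(scores: dict[int, int]) -> str:
    """Apply the decision rules to per-question scores (questions 1-9, each 0-3)."""
    if set(scores) != set(range(1, 10)) or not all(s in (0, 1, 2, 3) for s in scores.values()):
        raise ValueError("expected a 0-3 score for each of questions 1 through 9")

    # Hard fail: support geography (Q3) and reference quality (Q8) are not negotiable.
    if scores[3] == 0 or scores[8] == 0:
        return "Decline: scored 0 on Question 3 or Question 8"

    total = sum(scores.values())  # maximum 27
    if total >= 22:
        return f"{total}/27: proceed to contract negotiation"
    if total >= 16:
        return f"{total}/27: proceed with caution and negotiate remedies for the low-scoring areas"
    if total >= 10:
        return f"{total}/27: high risk, proceed only if the gaps can be contractually closed"
    return f"{total}/27: decline"

# Example: strong across the board except software update discipline (Q7).
print(rfq_decision({1: 3, 2: 3, 3: 3, 4: 2, 5: 3, 6: 3, 7: 1, 8: 2, 9: 2}))
# -> 22/27: proceed to contract negotiation
```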
How to Run This RFQ
Include all nine questions in the RFQ document. Require written responses, not verbal answers in a sales call. Give vendors ten business days to respond — a vendor who cannot answer these questions in writing within that window will not be able to support your deployment in the field.
Have each evaluator score the responses independently, then compare. Where scores diverge significantly between evaluators, discuss the specific language that drove the difference — that discussion will surface assumptions about what "good" looks like that need to be aligned before you select a vendor.
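A quick way to surface those divergences, sketched below with hypothetical scores from three evaluators:

```python
# Hypothetical independent scores from three evaluators, per question (1-9).
evaluator_scores = {
    "evaluator_a": {1: 3, 2: 2, 3: 3, 4: 1, 5: 2, 6: 3, 7: 2, 8: 2, 9: 3},
    "evaluator_b": {1: 3, 2: 2, 3: 3, 4: 3, 5: 2, 6: 3, 7: 2, 8: 1, 9: 3},
    "evaluator_c": {1: 2, 2: 2, 3: 3, 4: 1, 5: 2, 6: 3, 7: 2, 8: 2, 9: 3},
}

for q in range(1, 10):
    values = [scores[q] for scores in evaluator_scores.values()]
    if max(values) - min(values) >= 2:  # a 2-point spread on a 0-3 scale merits discussion
        print(f"Question {q}: scores {values} diverge; reconcile before ranking vendors")
```

In this hypothetical, only Question 4 is flagged: the evaluators read the vendor's integration-scope language very differently, and that is the conversation worth having.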
The goal of this RFQ is not to generate a ranking. It is to eliminate vendors who cannot deliver before you invest in a demo, a site visit, or a contract negotiation. Time spent in vendor selection is cheap compared to time spent managing a failed deployment.
For guidance on what to look for when you get to the demo stage, see the next article in this series: How to Score a Vendor Demo — Beyond "It Looks Impressive."