When to Kill a Deployment: The Data Points That Say It's Not Working
The hardest discipline in robotics program management is not launching — it's stopping. Here's how to tell when the data is saying no.

A specialty manufacturer in the Midwest spent 14 months trying to make a cobot cell work on their primary assembly line. In month 3, cycle times were 18% slower than spec. In month 5, the vendor sent a software update that resolved part of the problem. In month 7, a custom tooling redesign improved pick consistency. In month 9, an operator training refresh reduced the exception rate.
At every decision point, there was a plausible explanation for the underperformance and a credible near-term fix in front of them. The program never hit its production targets. They killed it in month 14 with a write-down and a post-mortem that made uncomfortable reading.
The people involved knew by month 7 that the business case wasn't working. The data was telling them. But the combination of sunk costs, organizational commitment, and vendor optimism kept them extending.
This pattern — knowing and not deciding — is how most failing deployments actually end. Not with a clear kill decision, but with a 14-month drift into irrelevance.
Why Killing Is Hard
The sunk cost effect is real and well documented across the behavioral economics literature. Once an organization has committed capital, management attention, and organizational credibility to a program, the evidence threshold required to reverse the decision is significantly higher than the threshold that justified the original decision. This is not irrational — it's human — but it's expensive in the context of capital-intensive technology programs.
Three specific mechanisms drive deployment programs past their rational endpoint:
Vendor escalation and extension offers. Vendors have a strong incentive to keep the deployment alive. An extended pilot at reduced cost, a software update that promises resolution, a reference visit to a customer whose numbers are better — all of these are genuine offers and all of them keep the clock running. A vendor who sincerely believes their product will work is not wrong to offer extensions. But the decision about whether to take the extension belongs to you, not them.
Internal champion protection. The person who championed the deployment has reputational stake in the outcome. This person is often the most knowledgeable about the deployment's actual status and the most reluctant to recommend a kill decision. Structuring the decision process so that the kill recommendation can come from someone other than the internal champion — typically Finance — removes this dynamic.
"We've learned so much" rationalization. Organizations do learn from failed deployments. That learning is genuine. But "we've learned so much" is frequently used to justify extending a deployment that the data says should be killed. Learning doesn't require continuing a losing deployment indefinitely; it requires a proper post-mortem, which is possible whether you kill at month 7 or month 14.
The Eight Kill Signals
These are the data points that, individually or in combination, indicate a deployment is not on a path to delivering its projected value.
Signal 1: Utilization Falling, Not Rising, Past Month 3
Effective utilization (robot actively working as a fraction of uptime) should improve through the first three months as the system is tuned, operator competency builds, and routing is optimized. A deployment where utilization is declining — or flat — past month 3 has a structural problem that minor tuning won't resolve.
Declining utilization almost always means one of three things: the application use case is not a good fit for the robot's capabilities in your environment, staff are actively routing around the robot (a change management failure), or the physical environment has more variability than the robot can handle (an application design failure). None of these are resolved by extending the timeline. They require a fundamental reassessment.
Kill threshold: Effective utilization below 60% at month 3 with no clear upward trend.
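
If you already log uptime and active-work hours, the threshold check is a few lines of arithmetic. A minimal sketch against invented weekly figures (the numbers and field names are illustrative, not from any real deployment):

```python
# Effective utilization = hours actively working / hours the system was up.
# The weekly figures below are illustrative, not from a real deployment.
weeks = [
    {"week": 9,  "uptime_h": 80, "active_h": 44},
    {"week": 10, "uptime_h": 82, "active_h": 45},
    {"week": 11, "uptime_h": 78, "active_h": 41},
    {"week": 12, "uptime_h": 81, "active_h": 42},
]

utilization = [w["active_h"] / w["uptime_h"] for w in weeks]
latest = utilization[-1]
trend = utilization[-1] - utilization[0]  # crude slope over the window

if latest < 0.60 and trend <= 0:
    print(f"KILL SIGNAL 1: utilization {latest:.0%}, trend {trend:+.1%}")
else:
    print(f"OK: utilization {latest:.0%}, trend {trend:+.1%}")
```

A four-week window is a crude trend measure, but the point stands: once the data exists, the comparison is mechanical rather than a matter of opinion.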
Signal 2: Vendor SLA Breached Consecutively for 60 Days
Vendors have contractual SLA obligations — uptime guarantees, response time commitments, resolution time targets. A single breach is a support ticket. Two consecutive months of breach against the same metric is a systemic problem.
The pattern: a vendor who repeatedly misses SLA on a deployment is either unable to support your specific application, under-resourced for your account, or has identified a limitation in their product that they're managing around without disclosing. None of these improve with additional time. The leverage to resolve them is in the contract — use it before it expires.
Kill threshold: Same SLA metric breached in two consecutive 30-day periods, with no written vendor remediation plan.
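
The consecutive-breach check is just as mechanical if you record a pass/fail per SLA metric per 30-day period. A sketch, with metric names and outcomes invented for illustration:

```python
# Monthly SLA results per metric: True = met, False = breached.
# Metric names and outcomes are illustrative.
sla_history = {
    "uptime_99":   [True, True, False, False],  # two consecutive breaches
    "response_4h": [True, False, True, True],   # isolated breach: a ticket
}

for metric, results in sla_history.items():
    # Flag any metric breached in two consecutive 30-day periods.
    if any(not a and not b for a, b in zip(results, results[1:])):
        print(f"KILL SIGNAL 2: {metric} breached in consecutive periods")
```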
Signal 3: Actual Payback Period More Than 50% Longer Than Projected
When you recalculate the payback period using actual operating data — real utilization, real maintenance costs, real labor absorption — and the result is more than 50% longer than the original projection, the business case has structurally changed.
This is different from a temporary performance shortfall. It means the assumptions underlying the deployment decision were wrong by a material margin. The corrected payback period may still be acceptable — a 30-month payback that turns out to be 45 months may be fine at your cost of capital. But it should be an explicit decision made with the corrected numbers, not a continuation based on the original projection.
Kill threshold: Revised payback period using actual month 3–6 data exceeds original projection by more than 50%, with no specific, dated, vendor-committed improvement that would close the gap.
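
The recalculation itself is one division once you have actual monthly net savings from operating data. A minimal sketch of the simple, undiscounted payback comparison, with every figure a placeholder:

```python
# Simple (undiscounted) payback: capital cost / monthly net savings.
# All figures are illustrative placeholders.
capital_cost = 250_000              # total installed cost, USD

projected_monthly_savings = 8_300   # from the original business case
actual_monthly_savings = 4_900      # from month 3-6 operating data

projected_payback = capital_cost / projected_monthly_savings  # ~30 months
actual_payback = capital_cost / actual_monthly_savings        # ~51 months

overrun = actual_payback / projected_payback - 1
if overrun > 0.50:
    print(f"KILL SIGNAL 3: payback {actual_payback:.0f} mo vs "
          f"{projected_payback:.0f} mo projected ({overrun:+.0%})")
```

A discounted version changes the numbers, not the logic: recompute with actuals, compare against the original projection, and make the continuation decision explicitly.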
Signal 4: Staff Routing Around the Robot Has Not Decreased
If anonymous staff surveys at weeks 4 and 8 show high rates of self-completion (operators completing robot-assigned tasks manually), and a follow-up survey at week 12 shows the rate hasn't decreased materially, you have a sustained change management failure.
The key distinction: if the routing-around rate is declining, you have a training and adoption problem that more change management can address. If it's stable or increasing at week 12, the issue is deeper — either the robot's performance isn't reliable enough for operators to trust it in production conditions, or the application fit is wrong.
Kill threshold: More than 30% of operators routinely self-completing robot-assigned tasks at week 12, with no significant improvement trend from week 4.
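
A sketch of the survey-trend check, assuming you track the self-completion rate by survey wave. The rates below are illustrative, and the 10-point bar for "significant improvement" is an assumption you should set before fielding the first survey:

```python
# Share of surveyed operators who report routinely self-completing
# robot-assigned tasks, by survey week. Rates are illustrative.
self_completion_rate = {4: 0.45, 8: 0.42, 12: 0.41}

week12 = self_completion_rate[12]
improvement = self_completion_rate[4] - week12

# ">30% at week 12, no significant improvement from week 4"; the
# 10-point threshold for "significant" is an assumed policy choice.
if week12 > 0.30 and improvement < 0.10:
    print(f"KILL SIGNAL 4: {week12:.0%} self-completing at week 12, "
          f"only {improvement:.0%} improvement since week 4")
```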
Signal 5: The Application Has Been Narrowed to the Point Where ROI No Longer Closes
It's common for deployments to progressively narrow their scope as operators and vendors discover limitations: "let's not run it on product variant B because the pick rate is too low," "let's stick to single-floor operations for now," "let's not use it during peak hours because the floor is too crowded."
Each individual narrowing is reasonable. The aggregate effect is often that the use case has been reduced to the point where the robot's utilization and throughput no longer justify the cost. A robot that was supposed to run 8 hours across your full product range and is now running 4 hours on a subset of products generates at most half the labor displacement the ROI model assumed.
Kill threshold: Cumulative scope reductions mean effective working hours are below 60% of the ROI model assumption, with no credible path to restoration.
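
Catching this requires a running tally of what each narrowing decision cost in working hours, because no single cut trips the threshold on its own. A minimal sketch, with the scope cuts and hours invented for illustration:

```python
# Compare effective working hours after scope cuts to the ROI assumption.
# All figures are illustrative.
roi_model_hours_per_day = 8.0       # what the business case assumed
scope_cuts_hours = {                # hours/day lost to each narrowing
    "variant B excluded": 1.5,
    "peak hours excluded": 2.0,
    "single-floor only": 1.0,
}

effective_hours = roi_model_hours_per_day - sum(scope_cuts_hours.values())
ratio = effective_hours / roi_model_hours_per_day

if ratio < 0.60:
    print(f"KILL SIGNAL 5: {effective_hours:.1f} h/day is {ratio:.0%} "
          f"of the modeled {roi_model_hours_per_day:.0f} h/day")
```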
Signal 6: Two Software Updates Have Caused Production Regressions
Software updates are a necessary feature of modern robotics platforms. They also create operational risk. A vendor whose update process has caused two production regressions in a deployment — meaning the system performed worse after the update than before, requiring rollback or extended troubleshooting — has demonstrated an inability to manage change safely at your site.
This is a vendor process problem, not a technology problem. It may be solvable — vendors can improve their update testing and staging processes. But it requires a direct conversation with the vendor's engineering leadership, a formal change to update procedures, and a written commitment before you continue.
Kill threshold: Two unplanned production regressions caused by vendor software updates within a 6-month period, without a written vendor commitment to a changed update process.
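
If you log regression incidents with dates, the windowed count is trivial to automate. A sketch, with illustrative dates:

```python
from datetime import date, timedelta

# Dates of production regressions traced to vendor software updates.
# Dates are illustrative.
regressions = sorted([date(2024, 3, 12), date(2024, 7, 2), date(2024, 8, 19)])

WINDOW = timedelta(days=182)  # roughly six months

for i, first in enumerate(regressions):
    within = [d for d in regressions[i + 1:] if d - first <= WINDOW]
    if within:
        print(f"KILL SIGNAL 6: regressions on {first} and {within[0]} "
              f"fall within a 6-month window")
        break
```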
Signal 7: Integration Maintenance Is Consuming More Than 15% of Your Technical Staff's Time
Custom software integrations require ongoing maintenance. A well-built integration should require minimal attention after the first 90 days — occasional updates when either connected system changes, periodic monitoring, and standard incident response.
If your technical staff are spending more than 15% of their time maintaining the robot integration — debugging, rebuilding connectors after updates, managing data quality issues — the integration architecture has a structural problem. This is a tax on your technical capacity that grows over time as connected systems evolve and the integration debt compounds.
At this level of maintenance burden, the economic case for the deployment changes materially even if the robot itself is performing adequately. The technical staff hours have a cost that isn't in the ROI model.
Kill threshold: Integration maintenance consuming more than 15% of a technical FTE's time at month 6+, trending flat or increasing.
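
A sketch of the FTE-fraction check, assuming maintenance hours come from ticket or time-tracking data. The hours and the 160-hour working month are illustrative assumptions:

```python
# Fraction of one technical FTE consumed by integration maintenance.
# Hours are illustrative, e.g. pulled from ticket time-tracking.
monthly_maintenance_hours = [28, 30, 27, 31]   # months 6 through 9
FTE_HOURS_PER_MONTH = 160                      # assumed working month

fractions = [h / FTE_HOURS_PER_MONTH for h in monthly_maintenance_hours]
latest = fractions[-1]
trending_down = latest < fractions[0]

if latest > 0.15 and not trending_down:
    print(f"KILL SIGNAL 7: maintenance at {latest:.0%} of an FTE, "
          f"flat or increasing")
```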
Signal 8: The Vendor Has Been Acquired, Is Pivoting, or Has Lost Key Staff
Vendor stability matters. Your deployment's long-term performance depends on continued software support, parts availability, and engineering attention. These are at risk when:
- The vendor is acquired by a company with a different strategic focus
- The vendor announces a major product pivot that deprioritizes your deployment's platform
- The engineering team that built and knows your deployment's integration leaves
None of these are immediate kill triggers. But each requires a specific conversation with the vendor covering three things: continued software support commitments for your product version, parts supply guarantees, and the stability of your assigned support team. If the vendor can't provide satisfactory written commitments, the risk profile of your deployment has changed materially.
Kill threshold: Post-acquisition or pivot, vendor cannot provide written support commitments for your deployment timeline, and the business case relies on continued software development.
The Kill Decision Process
A clean kill decision requires:
A neutral analyst — someone who didn't champion the deployment — reviews the data and makes the kill recommendation to leadership. Finance is the natural owner.
A written kill memo — one page: what the data says (against each relevant kill signal), what the revised business case looks like, and a clear recommendation. This forces clarity and creates a record.
A vendor conversation — before executing, a direct conversation with the vendor's senior leadership (not the account manager) presenting the data and your assessment. This gives them one final opportunity to offer a substantive commitment. If they can't — or if the commitment doesn't address the root cause — proceed.
A post-mortem — after the kill decision is executed, a structured retrospective covering: what the original business case assumed, what actually happened, what the program team would do differently, and what organizational knowledge was built. This is the asset you retain from the deployment.
What Killing Is Not
Killing a deployment is not failure. It's accurate decision-making based on evidence.
The sunk cost is gone whether you kill at month 7 or month 14. The question is how much additional capital and organizational attention you spend between now and the inevitable conclusion.
The organizations that learn most efficiently from failed robot deployments are the ones that make clean kill decisions with documented reasoning, conduct genuine post-mortems, and apply the organizational knowledge to their next procurement. They're also the organizations most often cited as having successful subsequent deployments — not because they got lucky the first time, but because they extracted the learning from getting it wrong and used it.
A documented kill decision with a real post-mortem is worth more to your next robotics initiative than 14 months of optimistic extension.
This is the final article in the Deployment and ROI series. Start with the fundamentals: Why Most Robotics ROI Projections Miss by 40%