
The automation that fails most: lessons from 50 projects


Fernando Hernández

2025-03-10

After automating over 50 processes for companies in Argentina, Uruguay, and the rest of LATAM, these are the mistakes we see repeating, the stories that keep us up at night, and the framework we use to make automations survive their first 90 days.

The pattern nobody wants to see

After implementing over 50 automations for companies in Argentina, Uruguay, Colombia, and Mexico, I can say with certainty that the problem is almost never technical. The automations that fail most aren't the ones that use complex technology or integrate difficult systems. They're the ones built on top of processes that nobody took the time to understand first.

There's a dangerous fantasy in the market: that automation is a shortcut. That you can take a chaotic manual process, layer technology on top, and it magically becomes efficient. It doesn't work that way. It never worked that way. And yet, we keep seeing companies fall into the same trap.

The hard data: of the 50+ projects we've executed, 35% of the ones that got off to a bad start did so because the client wanted to automate a process that wasn't even documented. Another 25% failed because nobody measured the current state before starting. And 15% died because nobody thought about what happens when the bot encounters a case it doesn't understand.

This article is a walkthrough of the mistakes we've seen, the lessons we've drawn, and the framework we developed to ensure automations don't just work in the demo but survive in production.

Of 50+ projects: 35% failed due to undocumented processes, 25% because the current state wasn't measured, and 15% because edge cases weren't planned for. The problem is almost never technical.

Mistake #1: Automating the broken process

This is the most common and most expensive mistake. A food distribution company in Buenos Aires called us to automate their order process. The flow worked like this: salespeople sent orders via WhatsApp to an admin, who entered them into an Excel spreadsheet, which was then manually uploaded to the ERP. They wanted a bot to take the WhatsApp messages and load them directly into the ERP.

When we mapped the process, we discovered the admin wasn't just transcribing — she was also correcting salesperson errors (wrong product codes, impossible quantities), validating stock by checking another spreadsheet, and calling the customer when something didn't add up. Automating the flow as-is would have generated a disaster of incorrect orders.

What we did: first, we redesigned the process. Salespeople switched to a structured form in a simple app (no more free-form WhatsApp). The form validates codes and quantities against the catalog in real time. Only then did we automate the ERP upload. The result: 94% reduction in loading time, and errors dropped from 12% to under 1%.
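To make the "validate against the catalog in real time" step concrete, here is a minimal sketch of what such a validation could look like. The catalog shape, product codes, and limits are illustrative assumptions, not the client's actual system:

```python
# Hypothetical sketch of order-line validation against a catalog.
# Codes, fields, and limits are illustrative, not the client's real data.
CATALOG = {
    "SKU-001": {"name": "Flour 25kg", "stock": 120, "max_per_order": 50},
    "SKU-002": {"name": "Sugar 10kg", "stock": 0, "max_per_order": 80},
}

def validate_line(code: str, qty: int) -> list[str]:
    """Return a list of human-readable errors; empty means the line is valid."""
    errors = []
    item = CATALOG.get(code)
    if item is None:
        errors.append(f"unknown product code: {code}")
        return errors  # no point checking quantities for an unknown code
    if qty <= 0 or qty > item["max_per_order"]:
        errors.append(f"impossible quantity for {code}: {qty}")
    if qty > item["stock"]:
        errors.append(f"insufficient stock for {code}: {item['stock']} available")
    return errors
```

The point of running checks like these at form-submission time is that the salesperson fixes the error while the customer is still on the line, instead of an admin discovering it hours later.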

The lesson is brutal but simple: if you automate a broken process, you only make it break faster. You always have to redesign first.

If you automate a broken process, you only make it break faster. Always redesign first: in this case, switching from free-form WhatsApp to a structured form cut errors from 12% to under 1%.

Mistake #2: Not measuring the before

An accounting firm with 80 employees asked us to automate bank reconciliation. 'It takes us a huge amount of time,' they told us. When we asked how much exactly, the answer was 'a lot.' They had no metrics.

We insisted on measuring before touching anything. Result: reconciliation consumed 340 hours/month spread across 8 people. The real cost was USD 5,100/month (factoring in salaries and overhead). The error rate was 3.2%, generating rework that added another 45 hours/month.
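The reported figures are internally consistent. Assuming a fully loaded rate derived from the numbers themselves (USD 5,100 / 340 hours = USD 15 per hour, an inference on my part, not a figure from the engagement), the hidden rework cost can be checked in a few lines:

```python
# Back-of-envelope check of the baseline measurement. The hourly rate is
# a derived assumption: USD 5,100 / 340 hours = USD 15 per loaded hour.
hours_per_month = 340
monthly_cost_usd = 5_100
loaded_rate = monthly_cost_usd / hours_per_month  # 15.0 USD/hour

error_rate = 0.032           # 3.2% of reconciliations needed rework
rework_hours = 45            # extra hours/month caused by those errors
rework_cost = rework_hours * loaded_rate  # 675.0 USD/month of hidden cost

print(loaded_rate, rework_cost)
```

This is the kind of arithmetic the baseline measurement enables: before measuring, nobody could have said the errors alone were costing another USD 675 a month.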

With those numbers, we were able to build a solid business case, prioritize which reconciliations to automate first (highest volume, lowest complexity), and project a realistic ROI. Post-implementation: the 340 hours dropped to 40 (those requiring human intervention for exceptions), errors fell to 0.1%, and the ROI was 680% in the first year.

Without the initial measurement, none of this would have been possible. We wouldn't have known where to start, we couldn't have justified the investment to the board, and we'd have no way to demonstrate impact afterward. If you can't measure the process before automating it, don't automate it.

If you can't measure the process before automating it, don't automate it. In this case, measuring revealed 340 hours/month of hidden cost and enabled achieving 680% ROI in the first year.

Mistake #3: Ignoring edge cases (or underestimating them)

An insurance company hired us to automate claims intake. The main flow was clean: the insured party reports via web, the claim is classified, an adjuster is assigned, and the claim is processed. We automated everything in 6 weeks and the pilot was flawless.

Two weeks into production, the problems started. Claims involving multiple vehicles that the system didn't know how to handle. Reports where the insured wasn't the driver. Cases where the same claim was reported twice through different channels. Claims filed against a third party's policy. Each of these edge cases represented less than 2% of the volume, but combined they accounted for 18% of total cases.

The operations team ended up with more work than before: on top of their usual load, they had to track down and fix the cases the bot had processed incorrectly. In its first weeks, the automation created more operational load, not less.

The solution: we implemented a 'smart escalation' system. When the bot detects a case that doesn't fit known patterns, it flags it, extracts whatever information it could process, and escalates to a human operator with context. The operator resolves it, and that resolution feeds the model so it can handle the case on its own next time. Within 3 months, edge cases requiring intervention dropped from 18% to 5%.
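The escalation pattern itself is simple. A minimal sketch follows; the class names, pattern labels, and the trivial keyword classifier are all placeholders for illustration, not the insurer's actual system:

```python
from dataclasses import dataclass, field

# Illustrative "smart escalation" pattern: handle known cases automatically,
# escalate unknown ones with whatever context was extracted, and keep each
# human resolution so the classifier can be retrained on it later.
@dataclass
class Claim:
    raw_text: str
    extracted: dict = field(default_factory=dict)

KNOWN_PATTERNS = {"single_vehicle", "glass_damage"}  # hypothetical labels
training_queue: list[tuple[Claim, str]] = []

def classify(claim: Claim) -> str:
    # Stand-in for the real model: here, a trivial keyword rule.
    return "single_vehicle" if "one vehicle" in claim.raw_text else "unknown"

def handle(claim: Claim) -> str:
    label = classify(claim)
    if label in KNOWN_PATTERNS:
        return f"auto-processed as {label}"
    # Escalate: hand the operator the partial extraction as context.
    resolution = escalate_to_human(claim)
    training_queue.append((claim, resolution))  # feeds the model's next cycle
    return f"escalated, resolved as {resolution}"

def escalate_to_human(claim: Claim) -> str:
    return "multi_vehicle"  # placeholder for the operator's decision
```

The key design choice is the `training_queue`: every escalated case becomes a labeled example, which is what let the intervention rate fall from 18% to 5% over three months instead of staying flat.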

Mistake #4: The 'let's automate everything' syndrome

This is a classic we see in companies where a C-level executive got excited about AI after a conference. They arrive with a list of 15 processes to automate and want to start all of them at once. It's the perfect recipe for failure.

A logistics company in Montevideo wanted to automate: invoicing, shipment tracking, customer service, claims management, client reporting, freight coordination, commission settlement, and inventory control. All at the same time. With an IT team of 4 people.

What we did: we said no. We proposed starting with ONE single process — the one with the highest volume, lowest complexity, and greatest visible impact. We chose invoicing. In 8 weeks we automated it, stabilized it, and the IT team learned to maintain it. Only then did we move to the second process.

18 months later, they have 6 of the 8 processes automated, running in production, with clear impact metrics. If we had started them all at once, I'm convinced they'd have none of them working well today.

The rule: one process at a time. Two at most if they're independent and you have sufficient staff. Automation is a muscle you train, not a switch you flip.

Mistake #5: Not involving the team that does the work

A digital bank asked us to automate the corporate client onboarding process. We worked with the technology team and the operations manager. We designed a technically impeccable solution. When we put it into production, the onboarding analysts hated it.

Why? Because nobody asked them how they actually did their work. The documented process said one thing, but in practice the analysts had developed shortcuts, informal validations, and personal criteria that weren't in any manual. The automation ignored all of them.

We learned the lesson the hard way. Now, before automating any process, we spend 2-3 days sitting alongside the people who execute it. Not reading documentation — watching. Asking 'why do you do it this way?' and 'what happens when X?' 90% of the critical information for a good automation lives in the heads of the people who do the work, not in procedure manuals.

The added benefit: when the team participates in the design, adoption improves radically. They go from being 'the ones the bot is going to replace' to 'the ones who designed how the bot works.' And that changes everything.

Mistake #6: Automating without monitoring

This sounds basic, but we see it all the time: companies that implement an automation, leave it running, and never look at it again until it blows up. It's like putting a new employee on the job and never supervising them.

A case that stung: an e-commerce company automated price updates from their ERP to their online store. It worked perfectly for 3 months. One day, an ERP error sent zero prices for 200 products. The bot dutifully updated them. They sold 47 products at $0 before anyone noticed.

Since then, every automation we deploy includes a non-negotiable monitoring stack: anomaly alerts (unusual volume, out-of-range values, error rates), a real-time performance dashboard, a complete execution log with traceability, and automatic circuit breakers that pause the automation when something doesn't add up.
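A circuit breaker of the kind that would have caught the zero-price incident can be tiny. This sketch is an assumption-laden illustration (the 5% threshold and function names are mine, not from the incident): it refuses to write anything if too many values in a batch look out of range.

```python
# Illustrative circuit breaker for a price-sync job. The threshold is an
# assumption for the sketch, not a value from the incident described above.
class CircuitBreakerTripped(Exception):
    pass

def check_price_batch(prices: dict[str, float],
                      max_anomaly_ratio: float = 0.05) -> None:
    """Raise (pausing the sync) if too many prices look out of range."""
    anomalies = [sku for sku, p in prices.items() if p <= 0]
    if len(anomalies) > max_anomaly_ratio * len(prices):
        raise CircuitBreakerTripped(
            f"{len(anomalies)} of {len(prices)} prices are <= 0; sync paused"
        )

def sync_prices(prices: dict[str, float], store_update) -> int:
    check_price_batch(prices)      # circuit breaker runs before any write
    for sku, price in prices.items():
        store_update(sku, price)   # only reached if the batch looks sane
    return len(prices)
```

The design point: the breaker sits before the first write, so an upstream ERP glitch stops the whole batch rather than propagating 200 zero prices to the storefront.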

Monitoring is not a nice-to-have. It's part of the cost of automation. If you can't afford it, you're not ready to automate.

The framework: Measure, Redesign, Automate, Monitor

After all these lessons, we developed a framework that we apply to every project. It's not rocket science — it's systematized common sense. But it works.

Measure: before touching anything, we measure the current process. Time per execution, frequency, error rate, real cost (including overhead), and team satisfaction. This gives us the baseline to calculate ROI and prioritize.

Redesign: with data in hand, we analyze the process. Does it have unnecessary steps? Can it be simplified? Are edge cases identified? Is the information flow logical? We redesign on paper first — no technology — until the process makes sense.

Automate: only now do we bring in technology. We choose tools based on the case (RPA, API integrations, AI-powered workflows, or a combination). We implement in phases: first the main flow with the most common cases, then we progressively add edge cases.

Monitor: we deploy the observability stack from day one. We define clear KPIs (execution time, success rate, volume processed, exceptions), configure alerts, and review performance weekly during the first month, biweekly during the following two.

Each phase has a checkpoint with the client where we decide whether to proceed, adjust, or pivot. There's no commitment to 'automate X thing' — there's a commitment to 'improve Y metric.' That completely changes the project dynamics.

Framework: Measure, Redesign, Automate, Monitor

Measure: time per execution, frequency, error rate, real cost, team satisfaction.

Redesign: eliminate unnecessary steps, simplify flows, identify edge cases — all on paper first.

Automate: choose tools (RPA, API, AI), implement in phases starting with the main flow.

Monitor: clear KPIs, alerts, weekly review (month 1), biweekly (months 2-3), client checkpoints.

Real ROI: the numbers we can actually show

I'll share real (anonymized) numbers from projects we completed in the past 18 months.

Project 1 — Bank reconciliation (accounting firm, 80 employees): Total investment USD 35,000. Annual savings USD 61,200. First-year ROI: 175%. Payback period: 7 months.

Project 2 — Invoice processing (distribution company, 200 employees): Total investment USD 28,000. Annual savings USD 89,000. First-year ROI: 318%. Payback period: 4 months. Bonus: they eliminated 2 days of delay in the collection cycle.

Project 3 — Client onboarding (fintech, 120 employees): Total investment USD 52,000. Annual savings USD 145,000. First-year ROI: 279%. Payback period: 5 months. Bonus: new client NPS increased by 23 points.

Project 4 — Support ticket classification (SaaS company, 60 employees): Total investment USD 18,000. Annual savings USD 42,000. First-year ROI: 233%. Payback period: 6 months.

The pattern is consistent: well-executed automations pay for themselves in 4-7 months. The ones done poorly... well, that's why you're reading this article.

An important note: these numbers include our fees, tool licensing costs, and the client team's time investment. They're not inflated numbers — they're what actually happened.
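For readers who want to sanity-check the figures: they are reproducible under one simple convention, which is my assumption about how they were computed, namely first-year ROI as annual savings divided by total investment, and payback rounded up to whole months.

```python
import math

# Reproduce the four projects' figures, assuming ROI = annual savings /
# investment and payback in whole months (rounded up). (investment, savings)
projects = {
    "bank_reconciliation":   (35_000, 61_200),
    "invoice_processing":    (28_000, 89_000),
    "client_onboarding":     (52_000, 145_000),
    "ticket_classification": (18_000, 42_000),
}

for name, (investment, annual_savings) in projects.items():
    roi_pct = round(100 * annual_savings / investment)
    payback_months = math.ceil(investment / (annual_savings / 12))
    print(f"{name}: ROI {roi_pct}%, payback {payback_months} months")
```

Running this yields 175%/7, 318%/4, 279%/5, and 233%/6 months, matching the project summaries below.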

Comparative ROI from 4 real projects

Bank reconciliation: investment USD 35K → annual savings USD 61K → ROI 175% → payback 7 months.

Invoice processing: investment USD 28K → annual savings USD 89K → ROI 318% → payback 4 months.

Client onboarding: investment USD 52K → annual savings USD 145K → ROI 279% → payback 5 months.

Ticket classification: investment USD 18K → annual savings USD 42K → ROI 233% → payback 6 months.

How to know if you're ready to automate

Before contacting any consultancy (including us), ask yourself these questions:

Can you describe the process in a paragraph? If you can't explain it clearly, it's not ready to be automated. Document it first.

Do you have metrics for the current process? If you don't know how long it takes, how often it runs, and what the error rate is, start by measuring.

Is the process stable? If it changes every two weeks, automating it is throwing money away. Wait until it stabilizes.

Do you have someone who can oversee the automation? You don't need a full-time engineer, but you do need someone who reviews alerts and knows what to do when something fails.

Is the team on board? If the people doing the work see automation as a threat, it will fail. Involve the team from day one.

If you answered 'yes' to all five, you're ready. If you answered 'no' to any of them, that doesn't mean you can't automate — it means you have groundwork to do first. And that groundwork is just as valuable as the automation itself.

At Orionis we do both: we help you prepare the ground and then we automate. If you want to assess your processes, write to us at [email protected]. The initial conversation is free and no-commitment.

5 questions before automating: (1) Can you describe the process in a paragraph? (2) Do you have current metrics? (3) Is the process stable? (4) Is there someone who can oversee it? (5) Is the team on board? If any answer is 'no', there's groundwork to do first.

Next step

Got a process to automate?

Answer 5 quick questions and get a cost and timeline estimate instantly.

No commitment · Instant response