The traditional sequence — develop, then test, then automate — is one of the most expensive workflows in software engineering. Here’s what happens when you flip it.
A complex feature, 15 deploy-and-fix cycles, zero manual retesting — this is what a well-designed test automation strategy delivers in practice.
On a white-label banking platform we’ve been building and scaling for years — now running 60+ microservices and 900+ API endpoints — we replaced that sequence with something fundamentally different. We call it automation-first: the QA engineer writes automated tests in parallel with the developer writing the feature, not after it ships.
The results were dramatic enough that we think this approach deserves a much wider audience. Not as theory, but as something we’ve practiced across hundreds of features, production incidents, and release cycles in a regulated fintech environment.
Modern software platforms — especially in fintech — release code continuously, run dozens of interdependent services, and operate under strict regulatory constraints. Manual QA cannot scale to match this pace. A structured test automation strategy is the operational backbone that lets engineering teams ship faster without trading quality for speed.
Manual testing and automated testing strategy serve different roles in a healthy QA process. Manual testing excels at exploratory work — finding edge cases a script would miss, evaluating UX flows, or validating novel business logic. Automated testing excels at repeatability: executing the same 900 regression scenarios after every deploy in under 15 minutes, with zero human effort per run.
| Dimension | Manual Testing | Automated Testing |
|---|---|---|
| Execution speed | Hours per test cycle | Minutes for full regression |
| Repeatability | Varies by tester | Identical every run |
| Best use case | Exploratory, UX, new features | Regression, API, CI/CD gates |
| Cost over time | Grows linearly with coverage | Shrinks per-test as suite scales |
| Setup effort | Low initially | Higher upfront, lower ongoing |
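The repeatability row is easy to see in code. Below is a minimal, self-contained sketch (all names and the in-process `fake_payments_api` stand-in are invented for illustration; a real suite would issue HTTP calls against the deployed service): each scenario is pure data, so running 3 or 900 of them after every deploy costs the same human effort, which is none.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    request: dict
    expected_status: int

def fake_payments_api(request: dict) -> int:
    """Stand-in for a deployed service; returns an HTTP-like status code."""
    if request.get("amount", 0) <= 0:
        return 400
    if request.get("currency") not in {"EUR", "USD"}:
        return 422
    return 200

SCENARIOS = [
    Scenario("valid payment", {"amount": 100, "currency": "EUR"}, 200),
    Scenario("zero amount rejected", {"amount": 0, "currency": "EUR"}, 400),
    Scenario("unknown currency rejected", {"amount": 50, "currency": "XXX"}, 422),
]

def run_regression(scenarios):
    """Execute every scenario identically on every run; return failures."""
    failures = []
    for s in scenarios:
        actual = fake_payments_api(s.request)
        if actual != s.expected_status:
            failures.append((s.name, s.expected_status, actual))
    return failures

if __name__ == "__main__":
    print(run_regression(SCENARIOS))  # empty list when the deploy is healthy
```

Because scenarios are data rather than manual steps, adding the 901st regression check costs one list entry, not another half hour per test cycle.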
The traditional testing workflow in most engineering teams follows a predictable rhythm: develop, test manually, automate if there’s a budget. Here’s what that rhythm actually costs. A developer picks up a ticket. They write the code, open a pull request, merge it. New code deploys to a test environment. A QA engineer reads the requirements, sets up test data, which in fintech means creating users, configuring accounts, seeding balances, sometimes triggering a KYC flow. They execute test scenarios manually. If something fails, they document it, file a bug, move the ticket back to the developer.
The developer doesn’t pick up the bug immediately. Hours pass. They read the bug report, ping QA on Slack. The QA engineer is busy with another ticket. They eventually connect, walk through the issue, spend time in logs. This cycle — what engineers call QA-developer ping-pong — can repeat five, ten, fifteen times on a complex feature. Each round costs 30 minutes to an hour of QA time for setup and verification alone, plus communication overhead, plus context-switching for both people.
Without a clear software test automation strategy, teams feel perpetually behind — because they are.
Automation-first doesn’t mean “automate everything before you write any code.” It means the QA engineer and the developer work in parallel from the moment a feature enters the sprint.
Sprint planning happens. The QA engineer sees which features are coming. They understand the business requirements, the API contracts being designed, the expected behavior.
Development and automation begin simultaneously. The developer starts building endpoints and business logic. The QA engineer starts writing automated test scenarios for the same feature, sometimes against placeholder endpoint names that get swapped for real ones once the code is deployed.
Sometimes the developer ships test endpoints first — just the API contract with minimal logic — and automation is built against those while full business logic is still in development.
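Writing tests against a contract that is not yet implemented can be sketched as follows. This is an illustrative example, not the project's actual framework: the endpoint path and field names are hypothetical, and the point is that the placeholder path lives in one constant so swapping it for the real route later is a one-line change.

```python
# Placeholder endpoint path agreed during design; swapped for the real
# route once the developer deploys (hypothetical name).
PLACEHOLDER_ENDPOINTS = {
    "create_account": "/api/v1/accounts",
}

# The agreed response contract: field name -> expected type.
EXPECTED_CONTRACT = {
    "account_id": str,
    "status": str,
    "balance": int,
}

def validate_contract(response: dict, contract: dict) -> list:
    """Return a list of contract violations; an empty list means the
    response conforms to the agreed API contract."""
    errors = []
    for field, expected_type in contract.items():
        if field not in response:
            errors.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors

# Until the service is deployed, the check runs against a canned example of
# the agreed response; once deployed, the same check runs on live JSON.
sample_response = {"account_id": "acc-1", "status": "active", "balance": 0}
print(validate_contract(sample_response, EXPECTED_CONTRACT))  # []
```

The same validation function is reused unchanged when the real endpoint ships, which is what makes parallel authoring cheap.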
The developer deploys. The moment new code hits the test environment, automated tests run against it. Results appear in minutes. If they fail, the developer sees exactly what failed — which test, which step, what the expected result was, what actually happened, and a direct link to the relevant logs.
There is no handoff. The ticket doesn’t move to “Ready for QA.” The developer doesn’t wait for someone to pick it up. The feedback loop is measured in minutes, not hours or days.
When the developer finishes the feature, I finish the automation. He deploys, the tests run automatically. If they fail, he sees why immediately. The ticket doesn’t go to “Ready for QA.” There’s no ping-pong. The developer gets instant feedback on whether the basic scenarios work.
— Victor Olkhovskyi
The most compelling evidence for this approach came from a single complex feature — an end-to-end flow involving multiple microservices, third-party integrations, and intricate business logic.
The QA engineer wrote the automation in parallel with the developer. When the developer deployed the first version, the tests ran automatically. They failed. The developer checked the test report — step-by-step reproduction, expected versus actual results, log links attached to each step — and identified the issue on the backend. No Slack messages. No meetings. No waiting.
The developer deployed a fix. Tests ran again. Failed again — a different issue this time. Fixed. Deployed. Failed. Fixed. Deployed.
This happened more than 10 times. More than 10 deploy-and-fix cycles over the course of about a week.
Here’s the critical part: the QA engineer did essentially nothing during that week. No manual retesting. No setting up test data 15 times. No reviewing logs 15 times. No back-and-forth on Slack explaining how to reproduce the issue. The automated tests provided all the feedback the developer needed, completely self-service.
During that same week, the QA engineer was already writing automation for the next developer’s feature.
| ❌ Manual Workflow × 15 iterations | ✅ Automation-First × 15 iterations |
|---|---|
| 1. Dev moves ticket → “Ready for QA” | 1. Dev deploys new version |
| 2. QA notices ticket (delay: hours) | 2. Tests run automatically (minutes) |
| 3. QA sets up test data (30–60 min) | 3. Dev opens report: steps, data, logs |
| 4. QA tests, reviews DB, S3, logs | 4. Dev identifies issue, fixes it |
| 5. QA moves ticket → “Test Fail” | 5. Dev deploys → back to step 1. |
| 6. Dev picks up (delay: hours) | |
| 7. Dev asks: “How did you reproduce?” | |
| 8. Joint debugging session on Slack | |
| 9. Dev fixes → deploys → back to step 1 | |
| ≈ 2–3 weeks to deliver | ≈ 1 week to deliver |
Each manual test cycle on this particular flow took 30 minutes to an hour — setting up test data, checking database state, reviewing S3 buckets, inspecting message queues, verifying logs. Fifteen rounds is 7.5 to 15 hours of pure QA execution time, plus communication overhead. Nearly two full working days consumed retesting a single feature.
With automation-first, that cost was zero. The feature shipped in one week instead of two to three.
Imagine if I had to manually retest 15 times — each time setting up test data, checking databases, reviewing logs — each round takes 30 minutes to an hour. That’s the entire week just retesting. With automation, the developer just deploys, sees the results, and fixes. I was free to work on other things. It saves a colossal amount of time.
— Victor Olkhovskyi
The team applies the same principle to production incidents and bug fixes, but in reverse.
When a bug is reported in production, the first step is not to fix it. The first step is to automate the failing scenario.
This might seem counterintuitive — shouldn’t you fix the fire first? But consider the alternative: a developer pushes a fix, and then QA is asked to verify it. But QA never saw the bug firsthand. They’re testing a fixed version. How do they know they’re reproducing the exact scenario that was broken? How do they know the fix actually addressed the root cause and not just a symptom?
Step 1: Production bug is reported. QA writes an automated test that reproduces the failing scenario. The test runs and confirms: yes, this is broken.
Step 2: Developer applies the fix and deploys.
Step 3: The same test runs again. If it passes, the fix is confirmed. If it fails, the developer sees exactly what’s still wrong — no ambiguity.
Ongoing: The test stays in the regression suite permanently. If this behavior ever regresses, it’s caught immediately.
By automating the broken behavior first — writing a test that reproduces the bug and confirms it fails — the team creates a reliable verification mechanism. There’s no “I think it works now,” no uncertainty about whether you’re testing the right thing.
If there’s a bug in production, we automate it first, then fix it. That way you know exactly what was broken. When the fix is deployed, you see immediately whether it works. Because how can you verify a fix if you never confirmed the broken behavior? You don’t know if you’re testing the right thing.
— Victor Olkhovskyi
One of the subtler but powerful effects of automation-first is how it changes the quality of communication between developers and QA.
In a traditional workflow, when a QA engineer finds a bug, they write a description in Jira. Even a well-written bug report requires the developer to interpret it, reproduce it in their environment, and sometimes ask clarifying questions. “What test data did you use?” “Which brand was this on?” “Can you show me the exact request you sent?”
With automation-first, the test report is the bug report. It contains the exact test and step that failed, the expected versus actual result, the test data used, and direct links to the relevant logs.
A developer can open the Allure test report, see exactly what happened, click through to the logs, and start debugging — without ever pinging the QA engineer. No Slack thread. No screen-sharing session. No time wasted explaining something that the test report already shows clearly.
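The structure of such a self-service report can be illustrated with a toy step recorder. This is not the Allure API, just a minimal sketch with invented names and URLs, showing why a report that carries expected/actual results and a log link per step answers the developer's questions without a Slack thread.

```python
import json

class StepReport:
    """Toy stand-in for an Allure-style report: every step records what was
    expected, what actually happened, and where the relevant logs live."""

    def __init__(self):
        self.steps = []

    def step(self, name, expected, actual, log_url):
        self.steps.append({
            "step": name,
            "expected": expected,
            "actual": actual,
            "passed": expected == actual,
            "logs": log_url,  # hypothetical log-tracing URL
        })

    def first_failure(self):
        """The first failing step, or None if everything passed."""
        return next((s for s in self.steps if not s["passed"]), None)

report = StepReport()
report.step("create user", 201, 201, "https://logs.example/trace/1")
report.step("authorize payment", 200, 502, "https://logs.example/trace/2")
print(json.dumps(report.first_failure(), indent=2))
```

The developer opens the report, sees that "authorize payment" returned 502 instead of 200, clicks the attached log link, and starts debugging, with no one needing to explain how to reproduce the failure.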
As one engineer on the team described it: when there’s automation, it’s step, step, step — everything is clear. Just open the report, see the problem, fix it.
The following automated testing strategies reflect what actually works across complex, production-scale platforms:
Not every scenario needs automation at the same priority. Start with the scenarios that are tested most frequently, have the highest business impact if they fail, or have the most complex setup that automation can eliminate. For a banking platform, these are typically: account creation and KYC flow, payment authorization end-to-end, balance ledger accuracy, and authentication/session management.
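One simple way to turn that prioritization into something repeatable is a scoring function over run frequency, failure impact, and the manual setup time automation would eliminate. The weights and scenario numbers below are purely illustrative assumptions, not measured values from the project.

```python
def automation_priority(scenarios):
    """Rank automation candidates: frequently-run, high-impact scenarios with
    heavy manual setup float to the top. Weights are illustrative only."""
    return sorted(
        scenarios,
        key=lambda s: s["runs_per_month"] * s["impact"] + s["setup_minutes"],
        reverse=True,
    )

# Hypothetical candidates with rough estimates.
candidates = [
    {"name": "KYC flow", "runs_per_month": 40, "impact": 9, "setup_minutes": 45},
    {"name": "payment authorization", "runs_per_month": 60, "impact": 10, "setup_minutes": 30},
    {"name": "profile avatar upload", "runs_per_month": 5, "impact": 2, "setup_minutes": 5},
]

for s in automation_priority(candidates):
    print(s["name"])
```

Even a crude score like this makes the backlog discussion concrete: payment authorization and KYC outrank cosmetic flows before anyone argues about it.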
How to create a test automation strategy that stays healthy long-term: treat tests as production code. Review them in PRs. Refactor them when they become brittle. Delete them when they cover scenarios that no longer exist. A test suite that nobody maintains becomes a liability: false failures desensitize the team to real alerts.
Automation-first doesn’t work as a QA-only initiative. On this project, several deliberate choices made the approach possible:
This all works when there’s collaboration between QA and developers, and when developers also think about how we’ll write automation. It’s an automation-first approach.
— Victor Olkhovskyi
For teams evaluating whether automation-first is worth the investment, here’s what we measured on this project:
Feature delivery time: reduced from 2–3 weeks to approximately 1 week. The savings came almost entirely from eliminating manual retest cycles.
Manual regression testing: fully eliminated. Every backend release is validated by automated regression that runs on deploy and daily across all services and brands.
Release frequency: backend deployments went from once every 2–3 months to weekly, roughly a tenfold increase. Each release contains isolated changes to a single microservice, making rollbacks trivial.
Time to detect regressions: infrastructure problems and deploy regressions are now caught in 15 minutes to 2 hours, down from half a day to a full day previously.
QA focus: with basic scenarios covered by automation, QA engineers spend their time on edge cases, exploratory testing, and expanding automation coverage rather than on repetitive manual verification.
The most common objection to automation-first is that you can’t write tests for features that are still changing. “Wait until the feature is stable,” the argument goes. “Otherwise you’ll just be rewriting tests constantly.”
This objection makes sense for UI automation, where interface changes can invalidate entire test suites overnight. But for backend API testing — which is where this approach delivers the most value — it largely doesn’t apply.
API contracts tend to stabilize early. The endpoint paths, request structures, and expected response formats are usually defined during design. Field names might change, validation rules might evolve, but the core test scenarios remain valid. Updating a test because a field was renamed from `country` to `country_code` takes minutes.
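Keeping that update to minutes is mostly a matter of structure. A common pattern (sketched below with invented field and function names) is to keep field names in one constants module and build request payloads through helpers, so a rename touches exactly one line instead of every test.

```python
# Field names live in one place; when the API renames `country` to
# `country_code`, only this constant changes, not dozens of tests.
FIELD_COUNTRY = "country_code"  # was "country" before the hypothetical rename

def build_create_user_payload(name: str, country: str) -> dict:
    """Assemble a request body for the (hypothetical) create-user endpoint."""
    return {"name": name, FIELD_COUNTRY: country}

payload = build_create_user_payload("Ada", "UA")
print(payload)  # {'name': 'Ada', 'country_code': 'UA'}
```

The same idea applies to endpoint paths, status codes, and test data builders: the less each test hard-codes, the cheaper the suite is to keep in sync with an evolving contract.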
The cost of maintaining automation written in parallel is far lower than the cost of the manual ping-pong it replaces. Even if 20% of your tests need minor adjustments after a feature ships, you’ve still saved 80% of the manual verification effort — and you have permanent regression coverage going forward.
The real risk isn’t writing tests too early. It’s starting automation late, when the people who understood the feature are no longer available to inform the tests.
Our QA teams have implemented and scaled test automation strategies across platforms processing billions in transaction volume.
We assess your current testing workflow, identify the highest-value automation opportunities, and design a test automation strategy and roadmap matched to your team’s stack, release cadence, and compliance requirements.
We build automation frameworks from the ground up or take over and improve existing ones. Our frameworks are designed for the specific constraints of fintech: complex test data requirements (KYC flows, account structures, transaction histories), multi-brand architectures, and regulatory verification checkpoints.
For teams without in-house QA automation expertise, we provide embedded QA engineers who own automation coverage for your platform — writing tests in parallel with feature development, maintaining regression suites, integrating with your CI/CD pipeline, and delivering the self-service test reports that eliminate QA-developer ping-pong.
This is the same model we applied on the white-label banking platform described throughout this article.
Contact us to discuss how our approach could accelerate your team. Our engineers are available for a technical consultation.
ROI shows up in three places: delivery speed (teams with mature automation deploy 10–15× more frequently), defect cost (bugs caught in CI cost roughly 10× less to fix than in staging, 100× less than in production), and QA capacity (engineers shift from repetitive verification to higher-value exploratory testing). On the platform described in this article, a single complex feature saved up to 15 hours of QA execution time in one iteration cycle alone.
The correct response to a flaky test is never to retry or ignore it — always investigate the root cause. Common causes are race conditions in async flows, shared mutable test state, and environment instability. Quarantine flaky tests in a separate suite while they’re being fixed so they don’t block CI or silently drop from coverage.
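A lightweight way to implement that quarantine (test and marker names below are invented) is to maintain an explicit quarantine list and split the suite into a blocking CI gate and a non-blocking flaky suite:

```python
# Known-flaky tests under investigation; they still run, but in a separate,
# non-blocking suite so they neither gate deploys nor vanish from coverage.
QUARANTINE = {"test_settlement_race"}

ALL_TESTS = ["test_login", "test_settlement_race", "test_refund"]

ci_gate = [t for t in ALL_TESTS if t not in QUARANTINE]
flaky_suite = [t for t in ALL_TESTS if t in QUARANTINE]

print(ci_gate)      # failures here block the deploy
print(flaky_suite)  # failures here are tracked, not blocking
```

With pytest, the same split is usually expressed with a custom marker on the flaky tests and `-m`-based selection in the two CI jobs, which keeps the quarantine visible in the codebase rather than in someone's head.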
Start with scenarios tested most often and those where production failure costs the most. For fintech platforms this typically means: user registration and KYC flow, payment authorization end-to-end, balance accuracy after settlement, and authentication management. API tests on core endpoints deliver the highest early ROI — fast to write, stable to run, and immediately useful as CI gates.
Map your highest-risk and highest-frequency test scenarios first — these are your automation starting point. Select tools that match your existing stack, design a framework that separates test data from test logic, and integrate tests into CI/CD so they run on every commit. Treat test code as production code: review it in PRs, refactor it regularly, and let coverage metrics guide what to automate next.