
How the SimpleOne Low-Code Platform's Performance Grew 16x: Architecture, Practices, and Metrics

18 November 2025

The performance of Low-code platforms is a hot topic in the enterprise world. On one hand, interpreting configurations on the fly adds some overhead compared to pre-built solutions. On the other, it gives businesses the incredible flexibility to change their processes quickly. As we started working with some of the largest customers out there, we made performance our strategic priority. We're still pushing the platform's limits to meet the demands of big companies. Since 2023, SimpleOne's performance has grown from handling 68,000 to one million user requests per month — a result of systematic work on the platform's architecture.

Here, we want to share that journey with you and talk about the architectural decisions that allowed us to increase the performance of our Low-code platform.

Why Performance is a Deal-Breaker for Enterprise Low-Code

Companies are turning to Low-code platforms to automate their business processes faster and to be less dependent on developers. But there's a trade-off with this approach: the platform is essentially executing code "on the fly," at runtime. The system has to interpret all the settings, business rules, and user scripts as it's running, instead of compiling them ahead of time. This kind of architecture naturally creates more load than a standard, off-the-shelf solution, and that's something you have to keep in mind when choosing your automation tools.

"Even with their built-in limitations, Low-code platforms offer businesses amazing flexibility to change processes on a dime. Our job as the platform developers is to make sure that flexibility doesn't come at the cost of performance."

Technical Director at SimpleOne

For a large business, a slow automation platform isn't just an annoyance; it's a critical problem. When a system is sluggish, employees can't get their work done on time. It doesn't matter if they're handling support tickets, approving legal documents, or managing purchases. The very automation that was supposed to make things faster actually becomes a roadblock.

From what we've seen with our clients, this problem hits hardest during peak hours — usually between 10:00 AM and 3:00 PM. That's when most employees are actively working in the system, creating requests, updating tasks, and running reports. The load can spike dramatically, and a platform that isn't built to handle thousands of users at once can start to crumble. Users see pages that take forever to load, the system freezes up, and data processing errors start to creep in, which can throw off entire business processes. In the worst-case scenarios, automated workflows can come to a complete halt.

In the end, poor performance can wipe out all the benefits of automation and make your investment in the system worthless. A company can pay for licenses, spend months on implementation, and train all its employees, only to end up with a tool that slows them down instead of speeding them up.

Large companies understand this. When they're choosing a corporate Low-code platform, they put performance on the same level of importance as features. In the enterprise world, it's standard practice to run detailed load tests during the pilot phase. The SimpleOne team works hand-in-hand with our customers' IT departments to simulate real-world usage, measure response times under pressure, and see how the system behaves in stressful conditions. The final decision to go live is only made after we've proven the platform can handle the expected amount of data and number of users.

How We Measure the Performance of Our Low-Code Platform

In traditional software development, the term "performance" is most often associated with technical metrics like requests per second (RPS), operation execution time, and CPU/memory utilization. However, in the context of Low-code platforms, such indicators lose their standalone significance.

The reason is the very nature of Low-code. It's not just a set of REST APIs; it's an entire environment for running business logic through platform-level configurations like forms, business rules, and scripts. So, the main unit of "load" isn't a single technical request anymore; it's a full business scenario. In our case, that's a completed user request submitted via the platform.

For example, in an ITSM system built on a Low-code platform, a single business scenario might involve:

  • Authorizing a user and loading their permissions;
  • Displaying a request form (with dynamic fields, auto-calculations, etc.);
  • Creating the request and kicking off the workflow;
  • Generating and routing a task to the right person;
  • That person's actions (assigning, completing, and reporting);
  • Closing the process, sending notifications, and logging everything;
  • Calculating SLAs and other business metrics.

Basically, one business scenario can trigger dozens of internal technical operations all across the platform: the interface, the process engine, the database, the notification module, and so on.

Based on our experience, we've come to see that the most important metric for performance is the number of business scenarios processed in a given time, not just RPS or user count. For example, telling a company the system can handle "80 completed user portal requests per minute with a 95% SLA" is a concrete, reproducible number that means something to them.
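
To make that concrete, here's a minimal sketch of how such a metric could be computed from scenario completion logs. The SLA threshold, the sample data, and the field layout are illustrative assumptions for the example, not SimpleOne's actual instrumentation.

from collections import Counter
from datetime import datetime

SLA_THRESHOLD_SECONDS = 5.0  # assumed target for an end-to-end scenario

# (completion time, end-to-end duration in seconds) for each completed scenario
completed = [
    (datetime(2024, 12, 2, 10, 15, 3), 2.4),
    (datetime(2024, 12, 2, 10, 15, 41), 6.1),
    (datetime(2024, 12, 2, 10, 16, 12), 3.0),
]

# Throughput: completed business scenarios per minute
per_minute = Counter(ts.replace(second=0, microsecond=0) for ts, _ in completed)

# SLA compliance: share of scenarios that finished within the threshold
within_sla = sum(1 for _, duration in completed if duration <= SLA_THRESHOLD_SECONDS)

print("peak scenarios per minute:", max(per_minute.values()))
print("SLA compliance:", f"{within_sla / len(completed):.0%}")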

The Stages of SimpleOne's Performance Growth: Technologies and Results

In 2023, we started a systematic project to boost our platform's performance. We were seeing more and more large-scale customers looking for local automation solutions, and we needed to meet their high standards. We had to prove to ourselves and to them that a Low-code platform could truly handle enterprise-level work. Here's the story of how we increased SimpleOne's performance by 16 times in just a year and a half.

It's worth mentioning that we did all our performance testing on a setup that mimicked a real client's logic. These were the kind of business processes a client would actually configure to automate their own unique scenarios. This made the performance requirements — and the testing itself — even more demanding.

Starting Point (August 2023)

  • Initial architecture: A monolithic system with a single database for everything. All parts of the platform — the web interface, the API, background processes — hit the same PostgreSQL database for both reading and writing data. For any operation, the system would first write the data, then immediately read it back to display to the user.
  • Performance metrics: Our first load tests showed a peak performance of only around 6 user requests per minute. After digging into the results, our systems team put together a plan to scale the platform and implement the CQRS (Command Query Responsibility Segregation) pattern.
Performance metrics graph
  • Main limitations: The database was the single bottleneck for the entire system. Our application servers had plenty of spare CPU and memory, but PostgreSQL was running at its absolute limit. Every request to read data was blocking resources needed to write new data.

Phase One — Horizontal API Scaling and DB Read/Write Separation (May 2024)

Technological changes:

  • We began routing all read requests (like looking up information) to our database "replicas" (copies), leaving the main server to handle only the write operations (like saving new data). This means commands that change data are now processed separately from queries that just read it. The only exception is for reads that happen inside a transaction — those still go to the main database to make sure the data is always consistent. A small sketch of this routing rule follows the list below.
  • We changed how we used our database replicas. Instead of just having them on standby for failover, we switched them to active use to help distribute the load. Now, the master database takes all the write operations and syncs the changes out to the replicas. We set this up to be automatic, with very little delay.
  • We also gave our system's nodes different jobs. We set up dedicated servers specifically for handling read and write operations. All the read-heavy tasks — like generating reports, loading lists of requests, and searching the knowledge base — were sent to the replicas. The master database is now used only for writing new data and updating existing records, freeing it up significantly.
  • We scaled the API horizontally and added more database replicas, which allowed us to spread the read load across several servers.
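
Here's a minimal sketch of the read/write routing rule described in this list. It's an illustration only: the class and parameter names (ConnectionRouter, dsn_for) are invented for the example and are not SimpleOne's internal API.

from itertools import cycle


class ConnectionRouter:
    """Routes writes (and in-transaction reads) to the primary, other reads to replicas."""

    def __init__(self, primary_dsn: str, replica_dsns: list[str]):
        self.primary_dsn = primary_dsn
        # Round-robin over the replicas spreads the read load evenly
        self._replicas = cycle(replica_dsns)

    def dsn_for(self, is_write: bool, in_transaction: bool = False) -> str:
        # Reads inside an open transaction must see their own writes,
        # so they go to the primary even though they are read-only
        if is_write or in_transaction:
            return self.primary_dsn
        return next(self._replicas)


router = ConnectionRouter(
    "postgresql://primary:5432/simpleone",
    ["postgresql://replica1:5432/simpleone", "postgresql://replica2:5432/simpleone"],
)
print(router.dsn_for(is_write=True))                         # goes to the primary
print(router.dsn_for(is_write=False))                        # goes to a replica
print(router.dsn_for(is_write=False, in_transaction=True))   # back to the primary

In practice this decision usually lives in the data-access layer rather than in business logic, so the configurations built on top of the platform don't need to know which server answers a given query.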

Practical result: We hit a new peak performance of about 41 user requests per minute, which was a ~580% increase from where we started. Page response times dropped by an average of 60%, and users stopped complaining about delays when running reports.

"During the load test, we kept adding virtual users, starting with 100, then 250, until we hit a plateau. Beyond that point, adding more virtual users only degraded performance, so at that stage we recorded 460,000 requests as the maximum."

Technical Director at SimpleOne

Why it worked: We took the pressure off our main database and eliminated the biggest bottleneck. Reading and writing data no longer fight each other for resources, so the system can process new requests and generate analytical reports at the same time without one degrading the other.

Phase Two — Adding PgBouncer and Optimizing the Monolith (December 2024)

Technological changes:

  • After a series of load tests, we pinpointed some new bottlenecks in our system's architecture. Even though we had implemented CQRS and replication, we found that performance was now hitting a wall because of limitations in how we were handling database connections.
  • We implemented PgBouncer, which acts as a connection manager between our application and PostgreSQL, keeping a pool of connections ready to go. We also did a lot of work on our caches: we set up multi-level data caching, fine-tuned how long different types of data stay in the cache, and started pre-loading frequently used data into memory to speed things up. A small caching sketch follows the list below.
  • We also broke our system's components down by their specific jobs to allow for more precise scaling. We set up dedicated servers for handling user HTTP requests, background tasks, and job schedulers. Then, we configured a load balancer to distribute traffic between all these different components.
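
As an illustration of the caching part of this phase, here's a minimal two-level cache sketch. The data categories, the TTL values, and the plain dictionary standing in for a shared store are assumptions made for the example, not the platform's real configuration.

import time

# Assumed cache lifetimes per data category (illustrative values)
TTL_BY_CATEGORY = {
    "user_permissions": 300,  # changes rarely, safe to keep for 5 minutes
    "form_metadata": 600,     # platform configuration, refreshed every 10 minutes
    "record_lists": 30,       # volatile data, kept only briefly
}


class TwoLevelCache:
    def __init__(self, shared_store=None):
        self._local = {}                                                  # level 1: in-process memory
        self._shared = shared_store if shared_store is not None else {}  # level 2: shared store stand-in

    def get(self, category, key, loader):
        full_key = f"{category}:{key}"
        ttl = TTL_BY_CATEGORY.get(category, 60)
        now = time.monotonic()

        # Level 1: cheapest lookup, no network round trip at all
        hit = self._local.get(full_key)
        if hit and now - hit[0] < ttl:
            return hit[1]

        # Level 2: shared across application nodes (e.g. Redis in a real setup)
        hit = self._shared.get(full_key)
        if hit and now - hit[0] < ttl:
            self._local[full_key] = hit
            return hit[1]

        # Miss: load from the database and populate both levels
        value = loader()
        self._shared[full_key] = self._local[full_key] = (now, value)
        return value


cache = TwoLevelCache()
permissions = cache.get("user_permissions", "user-42", lambda: ["read", "write"])
print(permissions)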

"The most important lesson we learned over this year and a half is that you have to measure your system under a real load, not just guess. When we were looking at the PgBouncer problem, it wasn't obvious: our servers weren't maxed out, but performance just wasn't growing."

Technical Director at SimpleOne

Practical result: We hit a new high of about 97 user requests per minute, a ~136% jump from the last phase. The system became much more stable during peak hours, even with several thousand support agents working at the same time.

Why it worked: We got rid of the hidden strain of constantly opening and closing database connections and optimized our memory usage through caching. This also means we can now add more power exactly where it's needed, without overpaying for resources we don't use.

System performance graph during a load test in a customer's production environment

The Overall Result

16-fold increase in performance in just a year and a half — from 68,000 to over 1,000,000 user requests per month.

Result confirmation: The results were validated through independent testing by the customer using a similar methodology.

Interim monitoring: We had regular quarterly meetings with the customer to track our progress and adjust our plans. Every stage came with a detailed report on what we had achieved and what we planned to do next.

The Importance of Smart Customization

On top of our own architectural improvements, we found another huge factor that affects performance: the quality of the Low-code configurations our clients build themselves. The platform is flexible and lets you build business processes in many different ways, but not all of those ways are equally efficient.

We did an audit of our customer's Low-code configurations and found some bottlenecks in their critical business scenarios. It turned out some of their processes weren't set up optimally: they had too many database calls inside loops, inefficient filtering rules, and duplicated logic in different places.

After we helped them optimize their business logic — without changing anything in the platform's architecture — the performance of their critical scenarios jumped by more than 15%. This was a great reminder that it's not just about having a fast platform; it's also about using it in a smart and professional way.
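
To show what the loop-related issue looks like in practice, here's a minimal sketch contrasting per-record queries inside a loop with a single batched query. It uses sqlite3 purely as a stand-in database, and the table and data are invented for the example.

import sqlite3

# sqlite3 stands in for the platform database; table and data are invented
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, assignee TEXT)")
conn.executemany("INSERT INTO tasks (assignee) VALUES (?)", [("alice",), ("bob",), ("alice",)])

task_ids = [1, 2, 3]

# Anti-pattern: one database round trip per record inside the loop
assignees_slow = [
    conn.execute("SELECT assignee FROM tasks WHERE id = ?", (tid,)).fetchone()[0]
    for tid in task_ids
]

# Better: a single batched query fetches all the records at once
placeholders = ",".join("?" for _ in task_ids)
assignees_fast = [
    row[0]
    for row in conn.execute(f"SELECT assignee FROM tasks WHERE id IN ({placeholders})", task_ids)
]

print(sorted(assignees_slow) == sorted(assignees_fast))  # True: same data, far fewer round trips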

Future Plans: Reaching the Next Level of Performance

Our next step is to implement event-driven approaches. This will help us use our resources even more efficiently by processing events asynchronously. Instead of doing everything in real-time, the platform will handle many tasks in the background, freeing up the user interface and spreading the load over time. Modern orchestration technologies will also ensure system elasticity and help the system scale automatically based on our own algorithms and metrics.
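
As a rough illustration of that direction, here's a minimal asyncio sketch in which the request handler only publishes events and background workers process them later. The event names and the worker count are invented for the example and don't reflect the platform's actual design.

import asyncio


async def worker(name, queue):
    while True:
        event = await queue.get()
        # Heavy work (notifications, SLA recalculation, audit logging) happens here,
        # off the user-facing request path
        print(f"{name} processed {event}")
        queue.task_done()


async def main():
    queue = asyncio.Queue()
    workers = [asyncio.create_task(worker(f"worker-{i}", queue)) for i in range(3)]

    # The request handler only publishes events and returns to the user immediately
    for event in ("request.created", "task.assigned", "sla.recalculate"):
        await queue.put(event)

    await queue.join()  # wait until the background workers have drained the queue
    for w in workers:
        w.cancel()


asyncio.run(main())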

Summary

Over a year and a half of dedicated work on SimpleOne's performance, we went from handling 68,000 to over one million user requests per month — a 16-fold increase. This was only possible because of our systematic approach to improving the platform's architecture.

The main improvements were:

  • We implemented elements of CQRS to separate read and write operations architecturally;
  • We started actively using database replication to distribute the load;
  • We implemented PgBouncer to efficiently manage our database connections;
  • We optimized our multi-level data caching;
  • We separated system components to allow for more targeted scaling.

The platform has proven that it can handle enterprise-level loads and deliver response times that corporate users expect. Its ability to scale horizontally means the system can grow right alongside our clients' businesses. They don't need to switch platforms when they hire more people or expand their processes; they just need to add more computing power where it's needed.

These results have been confirmed by independent load testing on a large customer's own infrastructure. We're continuing to make architectural improvements, and every update we release is based on analyzing real-world loads and listening to feedback from our enterprise-level clients.