Hello and welcome to this first blog post!

To set the tone for future articles, let’s define what this blog is about: software performance. To be more precise: software performance from the point of view of a software designer and developer. The subject should be of interest if you want to gain more insight into performance testing, tuning, and troubleshooting. A lot of different concepts can hide behind the word performance, so let’s clarify it.

Performō, performare… performance

If we open a dictionary and look at the definition of the word performance:

how well a person, machine, etc. does a piece of work or an activity

Cambridge dictionary

This short definition carries two very important pieces of information that will be the focus of this blog:

  • Performance is key: it relates to the quality of our software and how well the software fulfills what is expected from it.
  • We just need to measure “how well” the software does its job and we’ll know how it performs.

But how do we define this “how well”? What is good, and what is not good enough? The key element here is that performance can be a very subjective topic. Not everyone judges by the same criteria, and even on a given criterion, not everyone has the same expectations.

If we set aside the software industry for a second, we talk about performance in multiple contexts:

  • The performance of a sports club or a racing car
  • The performance of an employee in a company
  • The performance of an actor in a movie

The common point between these three examples is that, in all cases, performance is a positive quality and a key element in judging whether the sports club or the actor is worth watching, and whether the employee is worth keeping. But there is a key difference between the first example and the third one: you can easily assess the performance of a sports club or a racing car because a framework was put in place to measure it. A championship is organized with clear rules, and competitors are ranked based on those rules. The leader is recognized as the top performer.

For an actor, there is no clear rule to decide who acts better, which fuels ongoing debates. Performance reviews in companies are, in my opinion, the best example. Most companies try to rank their employees (more or less officially, depending on company policy) when it is time to give raises, but very few have metrics or KPIs to measure performance objectively, either because such metrics are difficult to put in place, do not make sense for the employee’s role, or because employees in the same role may work on very different things that cannot be compared.

But at least they have understood two main elements that we can apply to software performance:

  • The easiest way to assess performance is to compare against a reference
  • The more metrics and KPIs you have, the easier it gets to assess performance
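The idea of comparing against a reference translates directly to software: a simple performance regression check measures the system under test and compares the result to a baseline. A minimal sketch, assuming we already collected latency samples (in milliseconds) for a baseline build and for the build under test; all names and numbers here are illustrative, not from a real system:

```python
# Minimal sketch of "compare against a reference": a performance
# regression check. Baseline value and samples are made up.
baseline_p95 = 120.0   # reference: 95th-percentile latency of the last release (ms)
tolerance = 0.10       # allow up to 10% regression before failing

def percentile(samples, p):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[rank]

# Latency samples (ms) measured on the build under test.
current_samples = [101.0, 95.5, 130.2, 99.1, 118.7, 104.3]
current_p95 = percentile(current_samples, 95)

# Comparing against the reference makes "how well" objective:
assert current_p95 <= baseline_p95 * (1 + tolerance), "latency regressed"
print(f"p95 latency: {current_p95} ms (baseline: {baseline_p95} ms)")
```

Without the baseline, the number 130.2 ms is meaningless on its own; with it, the verdict is mechanical.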

The importance of metrics

What is the highest-performing car?

  • A Formula One
  • A rally car
  • A dragster

If you put a Formula One car on a mud track, it will be slower than a rally car, and if you enter a rally car in a 10-second race against a dragster, it will look ridiculously slow. This highlights that we need to define the context and what is expected from our software to know whether it is performing well.

Usually, when we talk about performance we talk about:

  • Response time and latency: time between the start of an operation/request and its response
  • Throughput: number of operations processed in a given time frame
  • Scalability: the ability to maintain response time and throughput when the load on the system increases
  • Resource utilization: CPU/memory/network/IO
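The first two metrics above are easy to capture in code. A minimal sketch of measuring average latency and throughput for an arbitrary operation; the `measure` helper and its parameters are hypothetical, standing in for whatever request or operation your system actually exposes:

```python
import time

def measure(operation, iterations=1000):
    """Measure average latency and throughput of a callable.

    Illustrative helper: `operation` stands in for any
    request/operation of the system under test.
    """
    latencies = []
    start = time.perf_counter()
    for _ in range(iterations):
        t0 = time.perf_counter()
        operation()
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    avg_latency = sum(latencies) / len(latencies)  # seconds per operation
    throughput = iterations / elapsed              # operations per second
    return avg_latency, throughput

# Example: measure a trivial in-memory operation.
latency, throughput = measure(lambda: sorted(range(100)))
print(f"avg latency: {latency * 1e6:.1f} us, throughput: {throughput:.0f} ops/s")
```

Note that averages hide outliers; real performance tests usually also report percentiles (p95, p99), since a few very slow requests can ruin the user experience while leaving the average intact.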

ISO 25010 defines:

performance efficiency: the degree to which a product performs its functions within specified time and throughput parameters and is efficient in the use of resources (such as CPU, memory, storage, network devices, energy, materials…) under specified conditions.

ISO 25010 standard

Those metrics must then be defined for each interaction with the system, not as a single metric for the whole software. For example, the acceptable latency when downloading a file is usually lower than when uploading one. It may also be acceptable for a rarely executed operation to consume more resources than a very frequent request.

Is performance the pinnacle?

We have defined that performance represents how well our software works and that a minimum level of performance is required to reach an acceptable user experience.

If performance is a must-have, why don’t we all drive Formula One or rally cars? Because there are other characteristics to consider. For cars, people drive family cars because they are cheaper and easier to maintain, and because people do not need that level of speed.

For software, other characteristics can be equally important. As with cars, one of the main drivers is cost, but there are plenty of others that can require trade-offs with performance. A few of them:

  • Security: the system may need to perform additional security checks that slow it down
  • Consistency: there is no point in being nanoseconds fast if the system delivers outdated data
  • Maintainability: you may need complex architectures and development to reach optimal speeds, but then people have to maintain that system


One of the key elements is to have someone capable of defining, through clear requirements, the minimum acceptable performance level for the software. Once these requirements are defined and communicated, everyone can start working on how the software should be designed, tested, monitored, and tuned to reach that level of performance. In one of the next articles, we will see how these requirements can be defined. Stay tuned!

