Having recently completed a round of performance testing on the LabVantage 8.4 platform, we thought now was a good time to talk about how to understand LabVantage technical specifications.

Also – you can use some of this knowledge to turbocharge your existing LIMS!

LabVantage & Performance Testing: A Quick History
In May 2018, LabVantage engaged a third party to conduct performance testing on our LabVantage 8.3 software. In November 2019, we updated those test scripts and ran them against our newer LabVantage 8.4 software.

What in the World is a BCU?
Coming from a technical and social world abounding in acronyms, LabVantage still chose to create one of its own! (It’s true: You Can Never Have Enough Acronyms – or YCNHEA!) The Base Computing Unit (BCU) is an abstract computing unit. One BCU corresponds to the following resources:

  • Processor: 4 Cores
  • Memory: 16 GB RAM
  • Disk: 50 GB minimum
  • Network: 100 Mbps NIC minimum
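The BCU-to-resource mapping above is simple enough to express as a small helper. This is just an illustrative sketch (the function name and structure are our own, not part of LabVantage); we also assume cores, memory, and disk scale with the BCU count while the NIC speed is a per-host minimum:

```python
def bcu_resources(bcus: int) -> dict:
    """Translate a BCU count into the concrete resources it represents.

    One BCU = 4 cores, 16 GB RAM, 50 GB disk (minimum), 100 Mbps NIC (minimum).
    Assumption: cores, RAM, and disk scale linearly with the BCU count;
    the NIC speed is treated as a per-host floor rather than multiplied.
    """
    return {
        "cores": 4 * bcus,
        "ram_gb": 16 * bcus,
        "disk_gb_min": 50 * bcus,
        "nic_mbps_min": 100,  # per-host minimum, not scaled
    }

# Example: a 2-BCU system corresponds to 8 cores and 32 GB RAM.
print(bcu_resources(2))
```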

So why did we go to the trouble of creating a new measure of computing resources? Two reasons: commonality and adaptability.

The nice thing about measuring computing resources in BCUs is that it is a common denominator. BCUs are general enough to be conveniently universal – representing resources across vendors, operating systems, and hosting types (on-prem vs. cloud, for example).

The BCU is also adaptable. It’s elastic enough to reflect inevitable improvements in technology over time. In other words, one BCU should perform differently on future computing resources in the same way one unit of currency might purchase goods differently a year from now.

Infrastructure & Tools
We took an adaptive approach to building infrastructure, gradually adding computing resources until we had significant results across four distinct architectures. Our smallest test was a 2-BCU, single-tier LabVantage LIMS system; our most robust platform was a 6-BCU, three-node, load-balanced cluster across two computing tiers.

Our goal wasn’t just to understand performance on a specific architecture, but across architectural boundaries as the system scaled both vertically and horizontally (more on that below).

For software, we used Apache JMeter installed across a cluster of smaller VMs and orchestrated by a JMeter master controller.
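For readers unfamiliar with distributed JMeter, a controller node drives a non-GUI test across remote worker nodes. The sketch below constructs such an invocation using JMeter's documented command-line flags (-n non-GUI, -t test plan, -R remote hosts, -l results file); the hostnames and file names are illustrative, not our actual test setup:

```python
# Sketch: building a distributed, non-GUI JMeter run from a controller node.
# Flags per the Apache JMeter documentation; hostnames/files are placeholders.
def jmeter_command(test_plan: str, workers: list, results_file: str) -> list:
    return [
        "jmeter", "-n",            # non-GUI mode
        "-t", test_plan,           # the .jmx test plan
        "-R", ",".join(workers),   # remote worker (load generator) hosts
        "-l", results_file,        # results log (.jtl)
    ]

cmd = jmeter_command("lims_scenario.jmx",
                     ["worker1", "worker2", "worker3"],
                     "results.jtl")
print(" ".join(cmd))
```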

Designing a Performance Testing Scenario
We recognized from the start that not all operations within a software system consume the same computing resources. Our testing scenario was intended to exercise rudimentary tasks, knowing that more advanced tasks would require proportionally more resources. Therefore, our automated testing used standard LIMS actions.

For example: log in, navigate the menu, search for a product, log a sample, add testing, eSig the record, enter results, review the results and sample, approve the sample, etc.

We established a realistic execution time of about three minutes for the above steps, assuming an experienced user working at a standard pace with all data immediately available for entry. We also added appropriate delays to the automated user interaction to mimic a real person who would stop and think, consult something offline, and perform other ancillary actions.
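The think-time idea can be sketched in a few lines. The step names and timing values below are illustrative assumptions (in JMeter itself, this is typically done with timers such as a Gaussian random timer between samplers):

```python
import random

# Sketch: mimicking human pacing in an automated scenario.
# Step list and timings are illustrative, not LabVantage's actual script.
STEPS = ["log in", "navigate menu", "search product", "log sample",
         "add testing", "eSig record", "enter results", "review", "approve"]

def think_time(mean_s: float = 12.0, jitter_s: float = 4.0) -> float:
    """Delay between steps: a mean with random jitter, clamped at zero."""
    return max(0.0, random.gauss(mean_s, jitter_s))

total = sum(think_time() for _ in STEPS)
print(f"simulated think time across the scenario: ~{total / 60:.1f} minutes")
```

Adding this pacing on top of the steps' own execution time brings the scenario close to the ~3-minute target for an experienced user.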

The Results Are In: What We Learned
We have a comprehensive white paper available to customers describing the details of our performance testing, but here’s a summary of what we learned (some of which you may be able to use in-house):

1. While clustered environments are necessary to provide high availability, vertical scaling delivers faster response times and supports more simultaneous users than horizontal scaling across the cluster.

2. Tuning the application server's Java settings can significantly improve performance:

  • Adjusting the JVM heap size (-Xmx) and metaspace size (-XX:MaxMetaspaceSize) in proportion to the available memory and CPUs.
  • Adjusting the EJB thread count and the data source connection pool to match the database location/configuration and the available memory and CPUs.
  • Adjusting the number of web threads in proportion to the available memory and CPUs.
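As a concrete illustration of "proportions of the available memory and CPUs," the sketch below derives starting-point values from a node's BCU allocation. The ratios are our own illustrative assumptions, not LabVantage-published formulas; real values depend on your configuration and should be validated under load:

```python
# Sketch: deriving starting-point Java settings from a node's BCU allocation.
# All ratios below are illustrative assumptions, not vendor recommendations.
def java_tuning(bcus: int) -> dict:
    cores = 4 * bcus      # per the BCU definition: 4 cores per BCU
    ram_gb = 16 * bcus    # per the BCU definition: 16 GB RAM per BCU
    return {
        "heap_gb": ram_gb // 2,   # -Xmx: leave headroom for the OS and metaspace
        "metaspace_mb": 512,      # -XX:MaxMetaspaceSize: app-dependent floor
        "web_threads": cores * 8, # scale web threads with available cores
        "db_pool": cores * 4,     # data source connection pool size
    }

# Example: a 2-BCU application-tier node (8 cores, 32 GB RAM).
print(java_tuning(2))
```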

3. Real-time memory and CPU utilization is the primary indicator of when a system may need more computing resources, so monitor both continuously.

4. The optimal settings for each system depend on customer configuration, customizations, interfaces, data volume, etc. If customer-specific load testing is not performed, monitoring should be performed during acceptance testing and reviewed with the Support team prior to go-live.

The same monitoring should continue through the first few months after go-live, with settings adjusted as required. Adding sites or changing the load may also affect the initial settings.

5. When running LabVantage for a production system, two BCUs should be the minimum specification for the database tier.

6. Additional BCUs will be required for larger systems, or systems performing more intensive operations such as complex calculations or intensive scheduled events.

Final Thoughts and Recommendations
To wrap up, we know the BCU concept is unique to LabVantage. We also know that it is a very useful way to understand how computing resources are used to support a target number of users…and to supercharge your LabVantage system!

For now, and based on our most recent performance testing results, 80 simultaneous users per BCU is a conservative but reasonable estimate. Certain systems will handle many more users, while heavily clustered systems may fall below this benchmark. Note, however, that systems performing more intensive operations may require additional resources.

Using this rule of thumb, we make the following general recommendations when scaling computing resources to a target number of simultaneous LabVantage users.
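The rule of thumb lends itself to a quick sizing sketch. The application-tier floor of one BCU is our own illustrative assumption; the two-BCU database-tier minimum comes from point 5 above:

```python
import math

# Sketch applying the 80-simultaneous-users-per-BCU rule of thumb.
# The 1-BCU application-tier floor is an illustrative assumption;
# the 2-BCU database-tier minimum is the production guidance above.
USERS_PER_BCU = 80

def estimate_bcus(simultaneous_users: int) -> dict:
    app_bcus = max(1, math.ceil(simultaneous_users / USERS_PER_BCU))
    return {"app_tier_bcus": app_bcus, "db_tier_bcus_min": 2}

# Example: 200 simultaneous users round up to 3 application-tier BCUs.
print(estimate_bcus(200))
```

Remember that this is a conservative starting point; intensive calculations, interfaces, or scheduled events shift the estimate upward.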

(Do you remember the top of this blog post mentioning how BCUs are elastic over time as computing power increases? LabVantage has doubled the number of simultaneous user connections we support since the last time we ran performance testing!)

Are you a current customer or active LabVantage prospective customer? Contact us for an Executive Summary of our most recent performance testing results and recommendations.
