If your product's only problem were a rapidly growing number of users, you'd probably jump for joy rather than fuss about it.
But as you know, growth can cause serious performance issues, such as servers slowing down - a problem you'll have to face sooner or later.
Of course, you can “enjoy the moment” of increasing revenues (usually, more users = more money) for a while, but afterward, these problems must be solved like any other issue.
What if they aren't solved?
Be prepared for higher server expenses and, if things go really wrong, for your users defecting to your market competitors.
But don’t worry - you’re not left alone to deal with this issue. Below, you’ll find a few tips on optimizing and scaling your Meteor project to save your budget and time.
Performance Monitoring
Before you start optimizing, you first need to know what actually needs to be optimized. Fortunately, there are ready-made tools that can help with doing it properly:
Application Performance Monitor (APM) - a tool allowing you to monitor and investigate issues with every part of your system. APM strives to detect and diagnose complex application performance problems.
When it comes to Meteor apps, in addition to general statistics like memory usage and connected sessions, APM provides information about Meteor methods and publications.
You have several APM services to choose from for your Meteor project.
Monti APM - an independent service for monitoring your system. Monti has a free plan for development with 8 hours of data retention and is cheaper than Meteor APM.
However, APMs (Application Performance Monitors) are not tools designed only for Meteor-specific applications - general-purpose APM services support other technologies as well. The examples below use Monti APM.
Monti APM provides you with a set of tools to monitor various parameters of your system in detail. Among other things, it has a general dashboard, detailed statistics for publications and the methods used, and monitors the errors occurring both on the backend and frontend in the user's browser.
The possibilities are much greater - in this article, you’ll find only a few of them to give you some context. For more information, check the Monti APM documentation.
To start collecting data and analyzing your statistics, you first need to create an account, choose a suitable plan and add the Monti APM package to your project with the single line of module configuration containing the API key.
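As a sketch of that setup, the server-side configuration can be as small as a single call (the keys below are placeholders - use the appId/appSecret from your Monti APM dashboard):

```javascript
// Hypothetical configuration after running `meteor add montiapm:agent`.
// Replace the placeholders with the real values from your dashboard.
Monti.connect('YOUR_APP_ID', 'YOUR_APP_SECRET');
```

Monti APM can also read these keys from Meteor settings instead of an explicit `connect` call; see its documentation for details.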
Find a method that needs improving
Let's use the service on a real-life example to see how you can improve your product. Some methods can be really time-consuming, so it's worth taking the time to find them:
Go to the detailed view under the Methods tab.
Sort the Methods Breakdown by Response Time.
Click on a method name in the Methods Breakdown and assess the impact that improving the selected method would have.
Look at the response time graph and find a trace.
Improve your method if you feel it is the right moment to do so.
Not every long-performing method has to be improved. Take a look at the following example:
methodX - mean response time 1,515 ms, throughput 100.05/min
methodY - mean response time 34,000 ms, throughput 0.03/min
At first glance, the 34-second response time may catch your attention, and methodY may seem the better candidate for improvement. But don't ignore the fact that this method is used only once every few hours, by system administrators or a scheduled cron job.
Now let's take a look at methodX. Its response time is evidently lower, BUT compared to its frequency of use it is still high: at 100.05 calls per minute, methodX consumes roughly 100 × 1,515 ms ≈ 2.5 minutes of server time every minute, while methodY consumes only about 0.03 × 34,000 ms ≈ 1 second. Without any doubt, methodX should be optimized first.
Optimization examples
During a performance test, you may notice that some parts of the app didn’t perform well enough. Below you’ll find a few examples of how to make it better.
Please, keep in mind that these are only examples of how to read and use the performance test data to find problems and come up with possible solutions. You may find some issue described here to be irrelevant to your project, and the cause of the problem may be somewhere else.
It’s also absolutely vital to remember that you shouldn't optimize everything as it goes. The key is to think strategically and match the most critical issues with your product priorities.
Low observer reuse
The subscriptions created in Meteor can cache results and reuse them to skip reading the database, provided the returned cursor is the same (the same query, sort order, and other parameters). With this knowledge, you can design your subscriptions to share as much data between fetches as possible.
The app has a user-notifications subscription that sends global and user-specific notifications simultaneously to every user connected to the app. There’s a lot of data with different types of messages that are fetched for every logged-in user.
The point here is that this publication has observer reuse at a 7% level, which means it can reuse cached results in only 7% of cases.
Here's roughly how such a publication might look in code (the collection and field names are illustrative, not the original implementation):

```javascript
// One publication mixes user-specific and global notifications.
// Because the selector depends on `this.userId`, the cursor differs
// for almost every user, so observers can rarely be reused.
Meteor.publish('user-notifications', function () {
  return Notifications.find({
    $or: [{ userId: this.userId }, { global: true }],
  });
});
```
This means that most users fetch a slightly different list of documents, so the previously retrieved information cannot be reused for other users.
Here is a solution proposal. Let’s divide this publication into two smaller pieces:
User-specific (again, the names are assumptions):

```javascript
// Each user still gets a personal cursor here, but it fetches only
// the relatively few documents addressed to that user.
Meteor.publish('user-notifications.own', function () {
  return Notifications.find({ userId: this.userId });
});
```
And global for everyone:

```javascript
// The selector is identical for every subscriber, so Meteor can share
// a single observer across all of them.
Meteor.publish('user-notifications.global', function () {
  return Notifications.find({ global: true });
});
```
Now you have one subscription with 100% observer reuse (global scope) and another still at 7% (user scope). The reuse number of the second one hasn't changed, but it now has far fewer documents for each user to fetch, while the much larger set of global notifications can be reused for everyone from now on.
Heavy actions optimization
Let's consider a scenario in which an app needs to send thousands of emails at the same time every day to remind a restaurant's clients that they have a reservation booked for tomorrow.
At the start of 2021, the team was sending around 200 individual emails over 24 hours, which is fine. But after a few months, the number gradually grew to around 10 thousand, and the system became overloaded because the mailing procedure was using up all the resources.
What solution would you propose here?
First, you can increase the available resources (vertical scaling). Such a move will immediately increase the server costs, but it will also solve the problem, right? Not exactly. What if the number of emails to send keeps growing? Are you prepared to pay even more? If not, you must dive into the code and investigate to find a more suitable solution.
The second simple option is to divide the emails into batches and spread the task over 3 hours (instead of 15 minutes of intense struggle). Depending on the mailing complexity, you can choose from several implementation ideas, from the simplest in-memory array to a third-party queue system built on Redis or RabbitMQ.
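A minimal sketch of the batching idea (the function names, batch size, and delay are assumptions, and `send` would wrap your actual mailer):

```javascript
// Split a long list of jobs into fixed-size batches.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Send one batch at a time, pausing between batches so the mailing
// job never monopolizes the server's resources.
async function sendInBatches(emails, send, { batchSize = 100, delayMs = 60000 } = {}) {
  for (const batch of chunk(emails, batchSize)) {
    await Promise.all(batch.map(send));
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}
```

With 10,000 emails, batches of 100, and a one-minute pause, the run spreads over well over an hour instead of one overwhelming burst.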
Unpredictable loading time
Consider an example of a work time management system for beauty salons. As you might know, beauty salon staff usually work in shifts. How many work shifts come to your mind? Morning, afternoon, and night, right? The client and the team thought the same and designed a publication with such user experience in their web app.
But one day, one of the beauty salons reported that the main page was taking too long to load. Can you guess what caused the problem? They had created 3.5 thousand work shifts covering every day for the next two years, and all of them were being loaded at once.
The solution was simple: stop publishing all the documents at once. The broader lesson is that the development team's job is to monitor and react to incoming issues on an ongoing basis.
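One hedged way to implement that is to publish only the date range the calendar actually displays. The collection and field names below are assumptions; keeping the selector a pure function makes it easy to test in isolation:

```javascript
// Build a Mongo selector for shifts that start within [from, to).
function shiftsInRangeSelector(from, to) {
  return { start: { $gte: from, $lt: to } };
}

// A hypothetical publication using it, with a hard cap as a safety net:
// Meteor.publish('shifts.inRange', function (from, to) {
//   return Shifts.find(shiftsInRangeSelector(from, to), { limit: 500 });
// });
```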
And that’s the root of many performance problems - the issues that arose due to not considering the consequences.
Database indexes
Let's say you want to fetch a list of orders in an online shop from a given category and day. With a large number of documents in your Mongo database, processing such a request can naturally be slow.
Did someone say “SLOW”? - MongoDB indexes come to the rescue!
Indexes are special data structures that store a small subset of the collection's data - the indexed fields, kept in sorted order - together with references to the original documents. Because the index is already sorted by the chosen keys, queries on those keys can be much faster: Mongo no longer has to analyze each document to see if it meets the requirements.
In the described case, the index could look like this (assuming an `Orders` collection):

```javascript
// A compound index on the two fields used by the query.
Orders.createIndex({ category: 1, createdAt: 1 });
```
The line above creates a compound index on (category, createdAt), sorted in ascending order by both keys (a value of -1 would mean descending). Every query on these fields can now search the sorted index instead of scanning the whole collection. If you modify the indexed collection, MongoDB automatically updates the index too.
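For context, here is a hypothetical query that such an index can serve (the collection and field names are assumptions):

```javascript
// Build the selector for "orders in a given category on a given day".
function ordersOnDaySelector(category, dayStart, dayEnd) {
  return { category, createdAt: { $gte: dayStart, $lt: dayEnd } };
}

// Usage inside a method or publication:
// Orders.find(ordersOnDaySelector('books', startOfDay, endOfDay));
```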
Scaling
App optimization is not everything. You can get to the point where performance just can't be improved further without hardware adjustments. These days you no longer need to manually upgrade RAM or CPUs - you can rely entirely on cloud solutions, which help you use your resources much more efficiently.
Vertical and horizontal scaling
There are two main ways of scaling: vertical and horizontal.
Vertical scaling boils down to adding more resources (CPU/RAM/disk) to your server, while horizontal scaling refers to adding more machines or containers to your pool of resources.
Horizontal scaling for Meteor projects typically includes running multiple instances of your app on a single server with multiple cores, or running multiple instances on multiple servers.
Autoscaling
Autoscaling automatically spins your server capacity up or down based on current usage - no more paying for unused RAM, CPU, or containers.
The only condition you must meet is having a registered Galaxy Professional plan on the Meteor Cloud.
In the Meteor Cloud you can manually manage two parameters: the type of containers (vertical scaling) and the number of containers (horizontal scaling). To change one of these parameters, find your project dashboard. The changes made will be applied automatically in just a few seconds.
With this knowledge, you can take a look at your performance statistics charts and manually change the type and number of containers depending on the usage. But if you want your process to be more automated, such as being able to automatically change settings depending on the time of day, you need to use autoscaling triggers.
Autoscaling triggers are executed when specified conditions are fulfilled. An example of such a condition:
If CPU usage falls below 10% after 11:00 PM every day, decrease the number of containers from 6 to 2.
Final thoughts
I hope your app is a smashing hit on the market, with both its popularity and your income growing rapidly. Unfortunately, it's also very likely that you'll start to struggle with performance issues. Still, many would wish to swap places with you. The tips above can help you remove the sour aftertaste from this almost perfect story.
To sum up these tips in a nutshell, remember to connect the tools monitoring your application performance (APMs) - learn how to analyze and work with their advanced statistics, and focus on specific bottlenecks in your system.
The next on your list should be autoscaling. Setting up the triggers can help reduce your overhead.
After completing this checklist, your app performance should significantly improve, you have my word.
But if your case looks more complicated than the ones described above, click here to find out how we can help you with Meteor, and feel free to drop us a line.