It is hard to manage the capacity and performance of your application without falling into one of two traps: starving your app or overspending on your cloud infrastructure.
Magalix scales containers and infrastructure based on predicted workloads to keep your applications within their performance goals with the least amount of resources. For example, if you want your web application's average API latency to be no more than 800 milliseconds, Magalix AI will optimize your containers and make sure that your application always meets that goal with the least amount of resources (CPU, memory, and I/O). If you are using our enterprise version, it will optimize your infrastructure as well.
Our AI algorithms understand your application’s run-time architecture, analyze the impact of your workloads on resource needs, and proactively scale your containers and infrastructure up or down.
DevOps should not be overloaded with old-school virtual machine management. Conventional infrastructure should disappear. Containers and microservices should just run with the right amount of resources.
Our Containers as a Service (CaaS) model offers the peace of mind to both developers and engineering managers.
You don't have to worry about the unnecessary details of VMs and low-level operating system complexities. You just plug in your containers and get all the building blocks of modern microservices-based applications, such as service discovery, configuration management, health metrics, logs, SSH, etc.
Some call it serverless, others call it container as a service; we call it a liberating experience!
Estimating needed capacity (CPU, memory, I/O, etc.) is tedious and involves a lot of guesswork. Magalix AI manages resource allocation at the container level based on your criteria.
Our unique pay-when-active billing model brings extra peace of mind: you pay only when your container has actual CPU activity. You don't pay while your container is dormant and not using any CPU resources.
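To make the pay-when-active idea concrete, here is a minimal sketch of how such a bill could be computed from periodic CPU-usage samples. The sampling interval, hourly rate, and activity threshold below are illustrative assumptions, not Magalix's actual billing parameters.

```python
SAMPLE_INTERVAL_SECONDS = 60   # one CPU-usage sample per minute (assumed)
RATE_PER_ACTIVE_HOUR = 0.02    # illustrative price, in dollars
ACTIVITY_THRESHOLD = 0.0       # any nonzero CPU usage counts as "active"

def pay_when_active_cost(cpu_samples):
    """Bill only the sampling intervals in which the container used CPU."""
    active_samples = sum(1 for usage in cpu_samples if usage > ACTIVITY_THRESHOLD)
    active_hours = active_samples * SAMPLE_INTERVAL_SECONDS / 3600
    return active_hours * RATE_PER_ACTIVE_HOUR

# A container busy for 30 of 120 sampled minutes is billed for 0.5 active hours.
samples = [0.4] * 30 + [0.0] * 90
print(f"${pay_when_active_cost(samples):.2f}")  # $0.01
```

A dormant container (all samples at zero) accrues no cost at all, which is the key difference from paying for an always-on VM.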
If you prefer to run your containers on your own VMs but still want the autopilot features, the Magalix enterprise version is the right choice. In this case, our AI models will optimize both containers and the underlying VMs to achieve the highest level of savings and performance for your applications.
Our AI models learn about your application by monitoring its vital signs and understanding its dependencies to achieve two main goals:
- Provide the best scalability options to keep it within its performance goals, and
- Provide insights to improve your application’s design and architecture.
Magalix gets smarter as it accumulates knowledge about cloud-based software best practices and matches it with what it has learned about your application to achieve these two goals.
Magalix does not have access to your application's data or code. It works at the operating system level to read consumption of CPU, memory, and I/O.
Magalix learns about your application’s resource consumption and where most of your cost is going. You can learn how much it costs you per customer interaction. Is your ROI getting better or not? Where should you focus your efforts to increase the efficiency of your code and customer experience?
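The cost-per-interaction figure above is simple division; the sketch below shows the arithmetic with made-up numbers. The monthly cost and interaction count are illustrative assumptions, not real figures.

```python
# Illustrative arithmetic for cost per customer interaction.
monthly_resource_cost = 1200.00   # total spend on CPU, memory, and I/O (assumed)
monthly_interactions = 480_000    # e.g. customer-facing API calls (assumed)

cost_per_interaction = monthly_resource_cost / monthly_interactions
print(f"${cost_per_interaction:.4f} per interaction")  # $0.0025 per interaction
```

Tracking this ratio month over month is one way to tell whether your ROI is improving: a falling cost per interaction means your code serves customers more efficiently.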
Different Modes of Optimization
When Magalix optimizes your resources, it can optimize based on either resource usage or key performance indicators.
- Resource optimization focuses on keeping your resource reservations to a minimum based on the usage patterns of your CPU, memory, and I/O.
- Key performance indicator (KPI) optimization focuses on keeping your application performant with the least amount of resources based on the KPI you provide. For example: keep a key API's average latency below 500 milliseconds.
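To illustrate the KPI-driven mode, here is a minimal sketch of a latency-driven scaling decision: add a replica when the observed average latency exceeds the goal, and remove one when there is comfortable headroom. The headroom factor and replica bounds are illustrative assumptions, and this reactive loop is only a simplification of Magalix's predictive models.

```python
LATENCY_GOAL_MS = 500      # the KPI from the example above
HEADROOM_FACTOR = 0.6      # scale down only when well under the goal (assumed)
MIN_REPLICAS, MAX_REPLICAS = 1, 20

def desired_replicas(current_replicas, avg_latency_ms):
    """Pick a replica count that keeps average latency under the KPI goal."""
    if avg_latency_ms > LATENCY_GOAL_MS:
        target = current_replicas + 1      # KPI violated: scale up
    elif avg_latency_ms < LATENCY_GOAL_MS * HEADROOM_FACTOR:
        target = current_replicas - 1      # ample headroom: free a replica
    else:
        target = current_replicas          # within range: hold steady
    return max(MIN_REPLICAS, min(MAX_REPLICAS, target))

print(desired_replicas(3, 620))  # 4: latency is above the 500 ms goal
print(desired_replicas(3, 250))  # 2: well below the goal, release a replica
print(desired_replicas(3, 400))  # 3: within range, no change
```

The resource-optimization mode would make a similar decision, but driven by CPU, memory, and I/O usage patterns rather than a user-supplied KPI.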
If you are new to Magalix, we recommend that you first read through the getting started section. You will run your first containerized app in seconds!
You can jump to the Magalix Concepts section for a more in-depth description of application structure and support services. Our References section provides quick details about our tools, CLI, and APIs.
Our documentation is a work in progress. Any feedback and requests are welcome. If you feel like something is missing, please let us know how we can improve it.