We often describe data as having "the five V's": volume, velocity, variety, veracity, and value. The first two, volume and velocity, are where systems feel the strain. When data moves from a central server to a mobile application consumed by thousands of concurrent users, the entire system serving that data can become overloaded and introduce latency for end users.
It's well documented that data is growing at a staggering rate. At the same time, consumer demand requires companies to manage this growing volume of information so that it flows quickly, in as close to real time as possible, to the end user or the edge. If large organizations don't account for the physics of software delivery, however, their development velocity, even if it starts as a flood, will slow to a trickle further down the pipeline.
This slowdown happens because developers must push artifacts (the building blocks of software) through the delivery pipeline all the way to last-mile deployment. These artifacts aren't lightweight, either; taken together, they can be massive. Every time an end user accesses this data through an application (or a development team updates it), the application downloads software artifacts from the pipeline. Just as with downloads on your mobile device or home computer, larger files take longer to transfer, and a multitude of files in the pipeline slows the entire process down even further.
Unfortunately, rushing to move data quickly to endpoints and constantly pushing updates through a pipeline often results in high infrastructure costs and latency. Both are bad for business; higher costs directly affect the bottom line, and latency can negatively impact development time, user experience, and customer satisfaction.
According to IDC, 50 percent of all new infrastructure will be deployed at the edge within the next few years, supporting billions of new products, which will likely exacerbate these issues. Edge computing processes data physically closer to its destination and reduces the amount of data coming from the primary network, boosting speed and decreasing latency in the process.
So, what can enterprises do to streamline their infrastructure and unclog software pipelines? To overcome the challenging physics of software delivery at scale, enterprises should take the following steps:
- Create a flexible distribution mechanism that is tightly integrated with the software lifecycle via DevOps processes. Using edges for software distribution, for example, gives businesses the flexibility to distribute software across various environments and remote development teams, which is increasingly vital in this era of distributed work.
- Utilize a dedicated, highly available network to speed up simultaneous downloads and, in turn, quicken the distribution of software. Today's businesses increasingly run on hybrid infrastructures that span multiple regions, edges, and IoT devices, and they need app delivery processes and platforms that account for it all (a minimal sketch of such a distribution step follows this list).
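To make the idea of a distribution step that lives inside the pipeline concrete, here is a minimal sketch in Python. It assumes a hypothetical set of regional edge repositories reachable over HTTPS and an EDGE_TOKEN credential; none of the hostnames, paths, or names refer to a real product's API. The point is simply that distribution is triggered by the build itself and that uploads to the edges run in parallel.

```python
"""Hypothetical post-build step: replicate one artifact to several edge repositories.

All hostnames, repository paths, and the EDGE_TOKEN environment variable are
illustrative placeholders, not a specific vendor's API.
"""
import os
import concurrent.futures

import requests

EDGE_REPOSITORIES = [
    "https://edge-us-east.example.com/artifacts/releases",
    "https://edge-eu-west.example.com/artifacts/releases",
    "https://edge-ap-south.example.com/artifacts/releases",
]


def push_artifact(edge_url: str, artifact_path: str) -> str:
    """Upload the artifact to one edge repository and return a status line."""
    artifact_name = os.path.basename(artifact_path)
    with open(artifact_path, "rb") as artifact:
        response = requests.put(
            f"{edge_url}/{artifact_name}",
            data=artifact,
            headers={"Authorization": f"Bearer {os.environ['EDGE_TOKEN']}"},
            timeout=300,
        )
    response.raise_for_status()
    return f"{edge_url}: uploaded {artifact_name}"


def distribute(artifact_path: str) -> None:
    # Uploads run concurrently, so total wall-clock time is close to the
    # slowest single upload rather than the sum of all of them.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        targets = [artifact_path] * len(EDGE_REPOSITORIES)
        for result in pool.map(push_artifact, EDGE_REPOSITORIES, targets):
            print(result)


if __name__ == "__main__":
    distribute("build/my-service-1.4.2.tar.gz")
```

In practice, a step like this would run as the last stage of a CI job, so a successful build, rather than a manual hand-off, is what pushes software toward the edge.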
Best practices for overcoming distribution challenges
The good news for companies attempting to overcome software physics is that they don't need to reinvent the wheel. Many organizations in the software supply chain have been working on the problem of distributed software for a while and have developed best practices.
For example, the DevOps Institute offers a Continuous Delivery Playbook, which serves as a solid go-to primer on how to speed up DevOps processes. The Cloud Native Computing Foundation (CNCF) provides a snapshot of the cloud-native landscape, a comprehensive and interactive series of charts that organizes the industry's vendors and platforms based on service type (database, key management, observability and analysis, and so on). And IDC has created an infographic from its volumes of research about how to accelerate trusted distribution of innovation everywhere, detailing the benefits of robust software distribution capabilities as they relate to successful digital transformation.
Regardless of which resource you use, remember to follow a few tenets of trusted software distribution. Pavan Belagatti, a DevOps expert, believes a trusted distribution mechanism consists of the following:
- Speed: Using the processes discussed above, developers must be able to distribute pieces of software as quickly as possible to speed up development and reduce downtime for end-users.
- Security: Security breaches can imperil the software supply chain at every turn. Ensuring security measures are baked in from the get-go — automating common security tasks including promotion and build acceptance, for example — is crucial to keeping distributed software from prying eyes.
- Reach: Companies should be able to distribute their software anywhere in the world if need be. Doing so effectively often involves leveraging data centers or cloud infrastructure zones and regions in locations with high concentrations of customers and end-users.
- Scale: Scale here refers to managing and maintaining the performance of the delivery pipeline. This includes setting up a network for multi-site replication, using processes and tools that ensure high availability, and scaling storage needs as the organization grows.
- Simplicity: Automate what you can and simplify as much as possible. Gartner calls this concept "hyperautomation," which in this case could include automatically triggering software distribution as part of the DevOps process (a minimal sketch of such an automated trigger follows this list).
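As an illustration of how these tenets can fit together, the sketch below gates promotion on build acceptance and then fans the released artifact out to several edge repositories. The CI status endpoint, artifact-manager URLs, repository names, and build ID are all hypothetical placeholders rather than any specific vendor's interface.

```python
"""Hypothetical promotion gate: only builds that pass acceptance checks are
promoted and replicated. The endpoints and repository names below are
illustrative placeholders, not a specific product's API."""
import sys

import requests

ARTIFACT_MANAGER = "https://artifacts.example.com/api"
REPLICATION_TARGETS = ["edge-us-east", "edge-eu-west", "edge-ap-south"]


def acceptance_passed(build_id: str) -> bool:
    """Ask the (hypothetical) CI system whether tests and security scans passed."""
    status = requests.get(
        f"https://ci.example.com/api/builds/{build_id}/status", timeout=30
    ).json()
    return status.get("tests") == "passed" and status.get("security_scan") == "passed"


def promote_and_replicate(build_id: str, artifact: str) -> None:
    if not acceptance_passed(build_id):
        sys.exit(f"Build {build_id} failed acceptance; {artifact} stays in staging.")

    # Promotion: move the artifact from the staging repository to the release
    # repository, so only vetted builds are ever distributed.
    requests.post(
        f"{ARTIFACT_MANAGER}/promote",
        json={"artifact": artifact, "from": "staging", "to": "release"},
        timeout=60,
    ).raise_for_status()

    # Replication: fan the released artifact out to every edge repository,
    # covering the "reach" and "scale" tenets without any manual step.
    for target in REPLICATION_TARGETS:
        requests.post(
            f"{ARTIFACT_MANAGER}/replicate",
            json={"artifact": artifact, "repository": "release", "target": target},
            timeout=60,
        ).raise_for_status()


if __name__ == "__main__":
    promote_and_replicate(build_id="8421", artifact="my-service-1.4.2.tar.gz")
```

The design choice worth noting is that nothing here is manual: the same event that marks a build as accepted also triggers promotion and replication, which is the hyperautomation idea applied to distribution.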
Address the physics of software and reap the benefits
Companies today compete on the customer experience. Organizations that can deliver products and services to customers quickly, seamlessly, and without downtime will emerge ahead of their competition. For this to happen, companies must understand the requirements for modern software distribution; they must learn the physics of software delivery. By identifying bottlenecks and building a flexible and trusted distribution mechanism, companies can overcome the challenges of physics and reap the benefits of distributed software.
Written by Sagi Dudai, EVP Product & Engineering at JFrog