The allure of software is fading: we expect too much from current methods, and software engineers are often unaware that they are losing the battle against complexity. Small failures pile on top of other small losses, making life more frustrating for consumers and businesses alike.

Apple’s products, for example, have gotten buggy, travel remains a pain, and call center experiences make us question both artificial and human intelligence.

To restore the magic of software, developers must stop walking systems through each step to achieve the desired result. Instead, as systems grow larger and more sophisticated by the minute, developers will need to leverage layers, intent-oriented algorithms, and artificial intelligence (AI) to make software more autonomous.

When we take a step back, it’s not surprising that some of the magic has faded. We’ve raised our expectations for software while also expanding our definition of who should be able to control it. More and more is expected to “just work” automatically, and we want a say in how our digital lives and jobs are automated.

These expectations are hard enough to meet when the software’s requirements remain static. Yet this automation is expected to handle real-time demands, where the parameters often change while the automation is running.

In the face of traffic, weather, and construction, getting from point A to point B in a car is difficult enough. But what about also optimizing for passengers’ phone calls during the ride, or for physical and digital commerce along the way? How about doing it for millions of automobiles on the same roads simultaneously? How about coordinating across cars, trains, airlines, hotels, restaurants, and more?

These considerations are beginning to necessitate a new programming model: declarative programming. In this model, we express an intent (a desired objective or end state) and the software systems determine how to “make it so” on their own. Humans define the boundaries and constraints, but expecting humans to always figure out how to get there is unrealistic; computers take over and complete the task.
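
As a loose illustration of the shift, here is the same goal expressed both ways in Python; the route data and selection logic are hypothetical stand-ins, not a real planner:

```python
# Imperative: the human spells out every step of the "how".
def cheapest_route_imperative(routes):
    best = None
    for route in routes:
        if best is None or route["cost"] < best["cost"]:
            best = route
    return best

# Declarative: the human states the objective ("minimize cost");
# the machinery decides how to satisfy it.
def cheapest_route_declarative(routes):
    return min(routes, key=lambda r: r["cost"])

routes = [{"name": "highway", "cost": 12}, {"name": "back roads", "cost": 9}]
assert cheapest_route_imperative(routes) == cheapest_route_declarative(routes)
```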

An illuminating analogy is well understood in the business world: management by objectives (MBO). In a strong MBO strategy, employees are told the targets they will be judged against, not precisely how to attain them. The goals might be based on sales figures, customer engagement, or product adoption; the employees must then determine the best route to get there. That often means adapting as circumstances change unexpectedly and learning as you go, so it becomes easier over time. In a sense, this alternative programming paradigm is MBO for machines: it manages software by objectives.

There is no shortage of examples of this need. Bots, or interfaces that take voice or text commands, are one of the hottest topics right now. While today’s bots are mostly command-oriented (e.g., find Jane Doe on LinkedIn), they will need to become intent-oriented (e.g., find me a great job candidate).
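
A toy sketch of the difference, with entirely hypothetical data and matching rules:

```python
# Hypothetical candidate pool; all names and fields are invented.
PEOPLE = [
    {"name": "Jane Doe", "skills": {"python", "sales"}, "open_to_work": True},
    {"name": "John Roe", "skills": {"java"}, "open_to_work": False},
]

# Command-oriented: the user dictates the exact lookup.
def find_person(name):
    return [p for p in PEOPLE if p["name"] == name]

# Intent-oriented: the user states a goal; the bot chooses its own
# criteria (here, skill match plus availability) to satisfy it.
def find_great_candidate(required_skills):
    return [p for p in PEOPLE
            if required_skills <= p["skills"] and p["open_to_work"]]

print(find_person("Jane Doe"))
print(find_great_candidate({"python"}))
```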

Consider the situation where you need to hire a new salesperson, engineer, or CIO. Instead of sitting at your computer and scouring the internet for talent, you talk to an intelligent chatbot that does all of the legwork for you. The chatbot is connected to an API that pulls applicants from LinkedIn and Glassdoor, enriches their profiles with GitHub and Meetup data, then contacts them to gauge interest and fit. Once a suitable applicant surfaces, the chatbot introduces the two of you to get things started. Over time, the chatbot learns which applicants work out and improves its sourcing. While this hiring process may sound futuristic, it is doable today with the proper orchestration of existing software.
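
A minimal sketch of that orchestration in Python; the LinkedIn, Glassdoor, GitHub, and Meetup integrations are stubbed out with hypothetical functions rather than real APIs:

```python
# Each stage stands in for a real integration; replace the stubs with
# actual API calls (LinkedIn/Glassdoor sourcing, GitHub/Meetup enrichment).

def source_candidates(role):
    # Stub: would query talent and job sites for matching profiles.
    return [{"name": "Jane Doe", "role": role, "signal": 0.2}]

def enrich(candidate):
    # Stub: would add public GitHub and Meetup activity to the profile.
    candidate["signal"] += 0.3
    return candidate

def gauge_interest(candidate):
    # Stub: would send outreach and interpret the reply.
    return candidate["signal"] > 0.4

def hiring_bot(role):
    candidates = [enrich(c) for c in source_candidates(role)]
    return [c for c in candidates if gauge_interest(c)]  # ready for an intro

print(hiring_bot("salesperson"))
```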

We can learn how software might tackle difficult scenarios at scale by looking at how our built-in computers (our brains) analyze images:
  • Light is absorbed by photoreceptor cells in the first layer, which does minimal processing and sends signals to the second and third layers of the retina.
  • In the second and third layers, neurons and ganglion cells collaborate to detect edges or shadows and communicate their results to the brain via the optic nerve.
  • There are more layers in the visual cortex: one locates objects in space; another detects and analyzes edges to piece together forms; a third turns those shapes into recognizable things like faces and objects. Each layer gains knowledge and improves its performance over time.
  • Finally, the last layer compares these faces or objects against the person’s stored memories, determining whether they are recognized.

This technique, in which each layer is accountable for a single goal and the goals grow more intricate as the levels of abstraction rise, is what allows software to be intent-based and address complex scenarios at scale. In the machine world, the layers are APIs freeing up crucial data, composite services managing data across different systems, and artificial intelligence making smart decisions at every layer.
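
A toy pipeline in the same spirit, where each layer owns a single goal and hands a more abstract result upward; the functions, threshold, and “memory” below are illustrative only:

```python
def sense(raw):                         # layer 1: minimal processing of raw light
    return [pixel / 255 for pixel in raw]

def detect_edges(signal):               # layer 2: find local contrast
    return [abs(b - a) for a, b in zip(signal, signal[1:])]

def find_shapes(edges, threshold=0.5):  # layer 3: keep strong edges as features
    return [i for i, e in enumerate(edges) if e > threshold]

def recognize(features, memory):        # layer 4: match features against memory
    return [memory.get(f, "unknown") for f in features]

raw_image = [0, 0, 255, 255, 0]
memory = {1: "left edge of a face", 3: "right edge of a face"}
print(recognize(find_shapes(detect_edges(sense(raw_image))), memory))
```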

This is the future of software, and it’s already begun in modern, massively distributed cloud computing systems like Google’s Kubernetes and its rich ecosystem; autonomous vehicles, both terrestrial and aerial; and, of course, artificial intelligence and machine learning, which are permeating every layer of our increasingly digital world.
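
Kubernetes embodies this model: operators declare a desired state, and controllers continuously reconcile reality toward it. Here is a minimal reconciliation loop in Python; the state dictionaries are hypothetical stand-ins, not the Kubernetes API:

```python
desired = {"web": 3, "worker": 2}   # declared intent: replica counts per service
actual = {"web": 1}                  # observed state of the cluster

def reconcile(desired, actual):
    # Move the actual state one step toward the desired state.
    for service, want in desired.items():
        have = actual.get(service, 0)
        if have < want:
            actual[service] = have + 1   # would start one replica
        elif have > want:
            actual[service] = have - 1   # would stop one replica
    return actual

while actual != desired:                 # the control loop converges over time
    actual = reconcile(desired, actual)

print(actual)  # {'web': 3, 'worker': 2}
```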

A paradigm shift is unavoidable given the increasing complexity brought on by an explosion of interdependent systems, dynamic data, and rising expectations. The new programming model will let humans do what they do best, choreographing outcomes, while computers tackle the complexity of getting there.