The Strategic Benefits of Randomized Decision-Making

The Naskapi, a nomadic people indigenous to Quebec and Labrador, hunted most of their food. You might expect, then, that much analyzing and strategizing went into the crucial decision of where to hunt: The Naskapi might have recorded how many moose or caribou they hunted to ensure they did not overexploit their hunting grounds; they might have made systematic plans to regularly explore new regions to discover new herds; or they might have tried to predict the likelihood of finding particular herds in different landscapes, like valleys, hills, or along rivers.

Instead, the tribe, along with many other ancient peoples, relied on divination. In the case of the Naskapi, this consisted of heating up the shoulder blade of a dead animal to the point where it cracked. They started their hunt in the direction to which the crack pointed.

This ritual will strike most people as superstitious and arbitrary, effectively making their strategic decision random. But that was precisely the point. The randomness of the process enabled the Naskapi to tackle the complex problem of choosing where to hunt quickly, without bias, and without becoming predictable to their prey. As a result, they avoided spending too much time and effort searching for the ideal hunting ground and survived in the hostile sub-arctic environment for hundreds of years.

This seemingly implausible link between the Naskapis’ divination practices and their ability to thrive in a harsh environment can be extended into a business context. We’ll look at how people are currently leveraging randomization to make operational decisions and discuss how applying this approach to strategy might enable businesses to thrive much as the Naskapi did. We’ll conclude by offering tips on how companies can introduce randomness into their strategic decision-making process.

Randomization Today

Randomized decision-making has proved successful in modern times as well — most notably in operations management. During the Second World War, the Allied powers faced tough challenges like figuring out where to find enemy submarines in vast open waters, based on the positions of last sightings. The problem was large because of the enormous number of possible paths the vessels might take, and because there were not enough ships or planes in one's own fleet to conduct a systematic, exhaustive search before an enemy submarine struck again. In these situations, research showed that quickly choosing a few places to search at random and then expanding the radius with random changes of direction beat systematic searches in which every step was planned in advance.

Around the same time, researchers working on the Manhattan Project invented Monte Carlo simulation, a method that relies on random sampling to estimate the outcome of a complex system or process, in which the environmental conditions themselves exhibit randomness. The scientists used this method to predict the performance of different bomb designs and shielding materials. Monte Carlo simulation continues to be used widely for making decisions under uncertain conditions, notably in finance and logistics.
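The mechanics of Monte Carlo simulation fit in a few lines of code. The sketch below estimates the chance a hypothetical three-task project finishes within a deadline by sampling uncertain task durations; the duration ranges and the 30-day deadline are invented for illustration.

```python
import random

def simulate_completion_time(n_trials=100_000, seed=42):
    """Monte Carlo estimate of the chance a three-task project finishes
    within 30 days when each task's duration is uncertain.
    The uniform duration ranges below are purely illustrative."""
    rng = random.Random(seed)
    on_time = 0
    for _ in range(n_trials):
        total = (rng.uniform(5, 12)     # design
                 + rng.uniform(8, 15)   # build
                 + rng.uniform(3, 9))   # test
        if total <= 30:
            on_time += 1
    return on_time / n_trials

print(simulate_completion_time())  # estimated probability of on-time delivery
```

No closed-form analysis of the system is needed: random sampling alone produces a usable probability estimate, which is exactly why the method travels so well from bomb design to finance and logistics.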

Today, randomization is often employed to enhance the efficiency and accuracy of machine learning techniques. For example, the initial parameters of a neural network are often chosen at random to avoid the network getting trapped in a particular configuration. Broadening the search space increases the chance of finding superior configurations.
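The same logic can be shown with a toy optimizer: a greedy search gets stuck on whichever local peak is nearest, while random restarts broaden the search space. The `bumpy` function and all parameters below are illustrative, not drawn from any real network.

```python
import math
import random

def local_search(f, x0, step=0.1, iters=200):
    """Greedy hill-climb from x0; it stalls at whatever local maximum it reaches."""
    x = x0
    for _ in range(iters):
        for cand in (x - step, x + step):
            if f(cand) > f(x):
                x = cand
    return x

def multi_start(f, n_starts=20, lo=-10.0, hi=10.0, seed=0):
    """Random restarts broaden the search, raising the odds of escaping
    poor local maxima -- the same motive as random parameter initialization."""
    rng = random.Random(seed)
    starts = [rng.uniform(lo, hi) for _ in range(n_starts)]
    return max((local_search(f, s) for s in starts), key=f)

def bumpy(x):
    return math.sin(3 * x) - 0.1 * x * x  # many local maxima, one global peak

best = multi_start(bumpy)
```

Any single start would likely stall on a minor peak; randomizing the starting point is what makes a good configuration findable at all.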

More prosaically, package-delivery companies already leverage stochastic optimization techniques to identify robust routes across a range of different scenarios (e.g., with factors such as shipping volume, time windows, vehicle availability, and road closures varying from day to day). There is also some evidence that designers of large-scale products and social systems develop an intuition for when to choose randomly and when to optimize, based on an estimate of the problem’s complexity.

What About Strategy?

In contrast to operations management, strategy-making has remained a stubbornly deterministic practice, which generally tries to, first, fully understand a problem and then deploy analysis to create a plan for tackling it. The Big Data revolution has encouraged this approach, leading us to believe that everything can be known and analyzed to create reliable strategies.

But this belief ignores the fact the very same piece of data can be used to argue for different, even opposing, courses of action. For example, Kodak’s response to observing a rising demand for digital photography was to double down on the traditional film business to ensure it would not cannibalize its sales. Meanwhile, competitors, such as Sony and Canon, reacted to the same information by investing heavily in R&D to improve digital camera technology.

Confidence in big data solutions also blinds people to the computational challenges. Organizational activities are often tightly coupled and interact densely — both with one another and with forces operating beyond the firm’s boundaries. As a result, a very large number of causal paths, factors, and networks come together to determine any outcome.

Even apparently “simple” problems at the foundation of making strategy work, such as choosing who should spearhead a problem-solving process, are in fact large. That is in part because there are many different kinds of authority different people can have — such as initiating a decision, awarding rewards, or sharing information — and, accordingly, many ways of assigning different levels of authority.

Finally, given the pace of change and volatility of today’s world, it is becoming infeasible to collect and analyze all information necessary for a deterministic approach to problem-solving in real time — doubly so as behavioral patterns of consumers and competitors keep changing in response to any moves made. The idea that you can always find algorithms that will reliably predict the outcomes of strategic decisions in the real complex world of business or politics will belong in the realm of science fiction for the foreseeable future.

In this context, strategy can no longer be about determining the single best course of action. Rather, the goal of strategy must shift from making plans to building a portfolio of options, each of which could form the basis for future success. Building this optionality, however, is inefficient in the traditional strategy-making approach, as the search for multiple possible solutions can be prohibitively costly and time consuming. Given these factors, perhaps it’s time for strategists to take a leaf out of the Naskapi playbook and embrace their intelligent randomization in the face of complex problems in large, unknown, and changing landscapes. Let’s turn to look at what benefits that might deliver.

Early Advantage

Consider Odeo, a company that had built an online podcasting platform and, in 2005, found itself facing insurmountable odds when Apple announced it would ship its own competing platform as part of iTunes with every one of its iPods. Company leaders at Odeo recognized that a radical shift was required and started holding daylong “brainstorming” sessions — a collective random search for new directions.

One idea that originated in these sessions was an online platform for sharing your status with friends and followers that, before its recent name change, was known as Twitter.

Given the situation Odeo found itself in, there was limited time for developing a strategic roadmap, let alone for conducting any meaningful amount of market research or competitor analysis in what was already a crowded space, with Facebook and MySpace each attracting tens of millions of active monthly users at the time. Instead, Twitter was launched as a minimum viable product (MVP), creating an option for Odeo to survive and, by continually iterating on the platform, thrive. (Ironically, Odeo's investors did not realize Twitter's potential and allowed the firm's management to buy back its stock for approximately $5 million — around 0.01% of what Elon Musk paid for the platform in 2022.)

A key lesson of the Twitter example is that, for many problems, solutions are only meaningful if they are implemented quickly enough for them to matter — the same benefit that the Naskapi ritual conferred. Had the Naskapi comprehensively analyzed their way to a solution, the herds they were looking for would probably have moved on.

Like Twitter, many of the best-known entrepreneurial ventures have benefited from the early-mover advantage that randomized decision-making and quick action bring. The first GoPro consisted of just a wristband and a plastic casing housing a cheap, off-the-shelf digital camera. Building customer relationships and iterating on this first design enabled the company to stand its ground when several heavyweights, like Sony, Nikon, and Garmin, later entered the market for action cameras.

Randomization works for incumbents just as well — arguably even more readily, because they have more resources to leverage. As the online shopping market began to move down different paths simultaneously (towards a search-based model like Google Shopping, a multi-shop model like Tmall, and a general store model like Amazon), Alibaba did not wait until it could reliably predict the winning model. Rather, it split its business and developed solutions for all three future scenarios, emerging more powerful than ever as it turned out that all of the new market segments were here to stay.

Faster Learning

Getting started sooner also means you learn faster. For example, launching an MVP early generates information by sparking competitor and customer reactions, which inform your next move. For the Naskapi, following the divined direction in their hunts meant that, just by chance, they would regularly make new discoveries, such as water sources, places for temporary settlements, or potential hunting grounds.

For a more contemporary example, consider large language models like ChatGPT, which work by predicting the probable next word based on the previous ones. Programmers can control the degree of accuracy of this process through a setting called “temperature” — the higher it is, the less likely the algorithm is to select the word that is predicted to have the best fit with what came before. Increasing the temperature reduces accuracy but boosts creative surprise, which may be desired by the user and also has the benefit of creating more variation in output and, in turn, user reactions, which allows the model to improve over time.
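The temperature mechanism is a minimal piece of code: a temperature-scaled softmax over candidate scores. The "next word" scores below are invented (real models work over vocabularies of many thousands of tokens), but the mechanics are the same — raising the temperature flattens the distribution so lower-ranked candidates are picked more often.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Sample an index from raw scores via a temperature-scaled softmax:
    higher temperature flattens the distribution, so lower-ranked
    choices are picked more often."""
    scaled = [s / temperature for s in logits]
    m = max(scaled)                           # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r, cum = rng.random(), 0.0
    for i, e in enumerate(exps):
        cum += e / total
        if r < cum:
            return i
    return len(exps) - 1

# Toy "next word" scores: index 0 is the best-fitting word.
logits = [3.0, 1.5, 0.5]
```

At a low temperature (say 0.2) the top-scoring word is chosen almost every time; at a high temperature (say 5.0) the alternatives appear regularly, which is where the creative surprise and output variation come from.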

Of course, businesses know about the value of experimentation. Physical retail stores have experimented with shelf placements for decades. In the digital context, firms routinely A/B test to optimize website designs, product recommendations, or pricing models.

But the scale and speed of testing and experimentation have generally been undervalued, partly because the tests are often designed to prove or disprove a precise hypothesis, which itself has been predicated on a largely stable environment. As a result, strategists learn less than they would if they turned up the temperature and conducted less precise, more varied, and more frequent testing. Random testing is often used to supplement hypothesis-driven testing in software development, for example.

Less Predictability

Employing a random strategy — selecting between all available moves with equal probability — is the only optimal strategy in a (repeated) game of rock, paper, scissors, because it is the only strategy that does not allow some dominant counter-strategy to emerge. In a more complex context, for example in chess, there are several famous examples of seemingly random (or at least counterintuitive) moves being made by a player that served to introduce complexity and stress into a game in which they faced a superior opponent.
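The game-theoretic claim is easy to verify in code. This sketch computes the uniform random strategy's expected payoff against an arbitrary opponent mix; it comes out to zero no matter what the opponent plays, which is precisely why no counter-strategy can exploit it.

```python
MOVES = ("rock", "paper", "scissors")
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(mine, theirs):
    """+1 for a win, -1 for a loss, 0 for a draw, from our perspective."""
    if mine == theirs:
        return 0
    return 1 if BEATS[mine] == theirs else -1

def expected_vs(opponent_probs):
    """Expected payoff of the uniform random strategy (each move played
    with probability 1/3) against any opponent mix -- always zero."""
    return sum((1 / 3) * q * payoff(m, o)
               for m in MOVES
               for o, q in opponent_probs.items())
```

For example, `expected_vs({"rock": 1.0, "paper": 0.0, "scissors": 0.0})` evaluates to zero: even an opponent who always plays rock gains nothing against the uniform mix.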

This benefit has long been recognized by some financial institutions, which employ randomness to obfuscate their trading strategies. By leveraging “scentless algorithms,” which introduce random delays and variations in the timing and size of orders, institutions can avoid signaling their intentions, which could be exploited by other market participants to register gains on the back of more competent traders' analyses. A simpler example is “fake door” testing, where random product or promotional configurations are presented to consumers online to learn by eliciting a reaction, while at the same time giving few clues to competitors.
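The order-obfuscation idea can be sketched in a few lines. This is a toy illustration, not real trading code: the size and delay ranges are invented, and real execution algorithms are far more sophisticated.

```python
import random

def slice_order(total_shares, n_slices, rng=random):
    """Split a parent order into child orders with jittered sizes and
    irregular delays, so observers cannot easily infer the full order.
    A toy sketch of the idea, not real trading code."""
    weights = [rng.uniform(0.5, 1.5) for _ in range(n_slices)]
    scale = total_shares / sum(weights)
    sizes = [round(w * scale) for w in weights]
    sizes[-1] += total_shares - sum(sizes)          # absorb rounding drift
    delays = [rng.uniform(0.5, 5.0) for _ in sizes] # seconds between orders
    return list(zip(sizes, delays))
```

Evenly sized, evenly spaced child orders would betray the parent order's footprint; the random jitter in both dimensions is what removes the "scent."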

Reduced Biases

Managers often tend to replicate past successful approaches, while being less receptive to new ideas or external signals. This can lead to decline as the environment shifts around them. Well-known examples abound: Blockbuster and Nokia deferred to the “tried and true” with disastrous consequences when demand and competitive conditions changed radically.

Embracing randomness can provide a solution to this problem. In nature, evolution — random mutations combined with natural selection pressures — ensures the continued adaptation of a species to a changing environment. Learning from biological phenomena, designers of evolutionary algorithms quickly generate random bits and pieces of a solution to a big problem, which are then assembled into trial solutions that are evaluated against an objective or fitness function.
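A minimal evolutionary algorithm fits comfortably in one screen. The sketch below solves the classic "ones-max" toy problem (maximize the number of 1 bits in a string); the population size, mutation scheme, and generation count are illustrative, not tuned.

```python
import random

def evolve(fitness, n_bits=20, pop_size=30, generations=60, seed=7):
    """Minimal evolutionary search: random bit-strings are selected by
    fitness, recombined, and mutated. A toy sketch, not a tuned GA."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]              # selection pressure
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_bits)            # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_bits)] ^= 1         # random mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# "Ones-max" toy problem: fitness is simply the number of 1 bits.
best = evolve(fitness=sum)
```

Nothing in the loop "understands" the problem; random variation plus selection pressure is enough to converge on a near-optimal bit-string, which is the point the biological analogy is making.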

In the political context, humans have been leveraging randomization for millennia. In ancient Athens, for example, a lottery system was used to select magistrates, to ensure that the rich and powerful did not buy their way into power. In the business context, organizations could easily adopt randomizing behavior to cut through the maze of political maneuvering and negotiations that accompany budget allocations to one or more competing alternatives, each with its own champion and leveraging private data. Because of the impartiality of randomizing over possible options, no one need feel unfairly excluded.

Not all biases against change are the product of prior success. In highly negative contexts, humans fall prey to learned helplessness, in which repeated experiences condition a person to believe that they have no power to change their circumstances, and, therefore, they do not even try to take decisions. Adopting a randomized process to decision-making in such situations may counter this bias to inaction.

How to Introduce Randomness into Strategy-Making

Our standard mental image of randomness is a coin toss or a dice roll. But for real-life problems, using these techniques would require knowing, enumerating, and evaluating all of your options first — which is almost the same as analyzing your way to an optimal strategy. Still, there are various tactics strategists can leverage to incorporate randomness into their toolboxes and identify a probably good-enough solution in good time for it to matter.

Vary the starting point.

Like machine-learning algorithms, use a random prompt to vary the starting point of your search. For example, to help musicians unlock their creativity, Brian Eno and Peter Schmidt have created a deck of cards with instructions that encourage lateral thinking, such as “change instrument roles,” “emphasize the flaws,” or “put in earplugs.” Introducing an element of chance into the creative process helps overcome creative blocks and ensures artists do not fall back on familiar habits and patterns.

Vary the pacing.

Vary the tempo or rhythm of the search (length of feedback loops). Too often, teams and groups are trapped by a “metronome” of quarterly reports, monthly reviews, and weekly meetings. But, to randomize intelligently, you may have to operate on time scales of days or even hours.

Vary the locus.

Do you search for a solution close to where you are now or far away? Random jumps can help unfreeze your strategic decision making by taking you to parts of the search space you had not previously considered. For example, research has shown that in the open ocean, where prey is scarce, fish use a search pattern that incorporates occasional, but extremely long “step” lengths between search areas, following a mathematical pattern known as Lévy flights.
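The heavy-tailed step lengths behind Lévy flights are easy to sample. This sketch draws from a Pareto-type power law, where most steps are short but rare steps are very long; the exponent and minimum step are illustrative.

```python
import random

def levy_steps(n, alpha=1.5, min_step=1.0, seed=3):
    """Draw step lengths from a Pareto-type power law P(step > x) ~ x**(-alpha):
    mostly short local moves punctuated by rare, very long jumps -- the
    heavy-tailed pattern behind Levy-flight foraging. Parameters are illustrative."""
    rng = random.Random(seed)
    steps = []
    for _ in range(n):
        u = 1.0 - rng.random()  # uniform in (0, 1], avoids division by zero
        steps.append(min_step * u ** (-1.0 / alpha))
    return steps

steps = levy_steps(10_000)
```

In a sample of ten thousand steps, the median stays close to the minimum step while the largest steps are orders of magnitude longer — local refinement most of the time, with occasional long jumps into unexplored territory.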

Vary the heuristics.

Do you search top down or bottom up? Depth first or breadth first? Not all big problems will yield to the same search strategy: some strategies work better than others on a given problem, and no single “one best way” works for every problem.

Vary the searcher.

Make different people responsible for the search or for different aspects of it. Since different people have different built-in biases, prejudices, and go-to methods, randomizing across different problem solvers will also help you randomize across possible solutions.

. . .

In the past, companies aspired to be “know-it-all” organizations that could understand and control their environments to such an extent that they were able to clearly plot an optimal path through them. Some companies, upon recognizing that knowing it all was no longer possible, pivoted towards a “learn-it-all” mindset, constantly fine-tuning their approach as new information emerged.

While this continues to be powerful, we believe it can be further extended to create a “search-it-all” approach, in which an emphasis is placed on actively probing the environment to generate the valuable information that enables developing optionality quickly and efficiently. And quick, random choices help you accomplish just that. Which brings us back to the Naskapi, whose superstitious ritual looks increasingly like smart decision-making in the face of complex, ambiguous challenges.
