The 3 Levels of Decision Automation

Darrell Leong, Ph.D.
6 min read · Apr 13, 2021


We make a multitude of decisions every day. These range from the inconsequential, such as which sneakers to wear or what to have for lunch, to the occasional life-altering choices like whether to take on a mortgage, make a career switch, or migrate to a new country. No individual is an expert in every field, so we make the most informed decisions we can with whatever knowledge we have and whatever information we can get our hands on. Unfortunately, expertise on decisions that carry high consequences is often scarce, and only a minority of individuals can afford access to it, with examples ranging from legal advice and financial planning to medical opinion. Thankfully, with the arrival of the Information Age, knowledge lakes once accessible only to the privileged few are now available to anyone with a smartphone and an internet connection. An hour spent Googling vacation information allows any computer-savvy individual to bypass costly agency fees on travel bookings.

However, before we replace our family physicians with WebMD, it is important to realise that not all decisions are easily made with raw information alone. Complexities within underlying processes, along with the local sensitivities of each individual’s circumstances, necessitate inference in decision making — inference that can only be performed by select persons with adequate domain knowledge. Herein lies a gap in applications where the availability of human expertise cannot keep up with growing market demand for quality advice. Under such circumstances, it becomes commercially viable to implement smart decision automation strategies that effectively scale up the reach of human experts.

In this article we explore three broad approaches to decision automation, in ascending order of complexity:

  1. Experiential Reasoning
  2. Objective Oriented Reasoning
  3. Risk Informed Optimisation

Experiential Reasoning

If it is the wet season, cultivate grains. Otherwise, cultivate potatoes.

An expert system that adopts this approach can be thought of as a direct automation of the human expert. The decision model itself takes one of two forms: a rule-based engine or a data-driven model.

In the rule-based engine, the human expert scripts his/her advice principles in the form of a programme. Users input their circumstances into the model, and the engine feeds them through its programmed advisory process before concluding with a recommendation. The structure of the advisory rules varies across applications, but is typically representable in the form of a flowchart or scoring matrix. Heuristic components of the advisory process must be objectively expressed. Construction of this engine is context-specific, usually requiring an extensive automation exercise from one use case to another.
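Using the crop example from the epigraph above, a rule-based engine might look like the following minimal sketch. The features, thresholds, and scoring step are invented for illustration, not real agronomic advice:

```python
# A rule-based advisory engine: the expert's flowchart and a small scoring
# matrix, scripted directly as code. All rules here are illustrative.

def recommend_crop(circumstances: dict) -> str:
    """Walk the user's circumstances through scripted advice rules."""
    # Primary flowchart branch: season drives the headline recommendation.
    if circumstances["season"] == "wet":
        return "grains"
    # Heuristic expressed objectively as a score: dry-season suitability
    # for potatoes, judged from irrigation access and soil quality.
    score = 0
    score += 2 if circumstances["has_irrigation"] else 0
    score += 1 if circumstances["soil_quality"] == "good" else 0
    return "potatoes" if score >= 2 else "legumes"

print(recommend_crop({"season": "dry", "has_irrigation": True,
                      "soil_quality": "good"}))  # potatoes
```

Note how every heuristic must be made explicit: the engine cannot represent judgement that the expert has not codified.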

Data-driven techniques for defining a decision model are an alternative to the rule-based approach. In place of codifying advice principles, the model is trained on historical consultations with the human expert. The advantage over the rule-based engine is that model building is context-agnostic. However, model complexity scales with that of the counterpart advice principles, possibly requiring deep-learning models to achieve decent performance. As such, the availability of consultation data determines the feasibility of this approach.
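As a toy sketch of the data-driven alternative, a nearest-neighbour lookup over past consultations can stand in for a trained model. The consultation records and features below are invented for illustration:

```python
# Learning the expert's mapping from circumstances to advice directly from
# historical consultations. A 1-nearest-neighbour lookup stands in for a
# real trained model; the records are illustrative.

consultations = [
    ({"rainfall_mm": 900, "soil_ph": 6.5}, "grains"),
    ({"rainfall_mm": 300, "soil_ph": 5.8}, "potatoes"),
    ({"rainfall_mm": 850, "soil_ph": 6.9}, "grains"),
    ({"rainfall_mm": 250, "soil_ph": 6.0}, "potatoes"),
]

def predict(query: dict) -> str:
    """Return the advice given in the most similar past consultation."""
    def dist(features: dict) -> float:
        return sum((features[k] - query[k]) ** 2 for k in query)
    _, advice = min(consultations, key=lambda record: dist(record[0]))
    return advice

print(predict({"rainfall_mm": 800, "soil_ph": 6.7}))  # grains
```

The model-building step is the same regardless of domain; only the data changes, which is what makes the approach context-agnostic.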

In either case, the decision making process is reduced to a black box from the user’s perspective, with the entire process seemingly independent of the decision outcome. Users thereby make an implicit assumption that the output recommendation is optimal. As such, the automated recommendation system can at best perform only as well as its human counterpart.

Objective Oriented Reasoning

Cultivate grains during the wet season because it will maximise yield at lower irrigation costs.

From here we delve into the realm of normative decision theory, which analyses the outcomes of decisions and determines the optimal decisions given constraints and assumptions. In objective oriented reasoning, the goal thus shifts from emulating the human expert to optimising the underlying objective outcome. Here, problem formulation defines the nature of the outcome, which is an objective success measure of the decision.

Like the decision model, the process model takes one of a multitude of forms, ranging from empirical models grounded in validated scientific research to data-driven models. Point estimate predictors deliver the expected outcome of a given decision, while probabilistic predictors additionally provide uncertainty measures around that expected outcome.

With the ability to predict outcomes, an optimisation strategy can be introduced to iterate combinations of decision parameters with the goal of achieving the best possible outcome within feasibility constraints.
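Putting the process model and the optimisation strategy together, a sketch of the two-step loop might look as follows. The quadratic yield model, the water cost, and the irrigation constraint are all invented for illustration:

```python
# Objective oriented reasoning: a process model predicts the expected
# outcome of a decision (an irrigation level), and a grid search iterates
# over feasible decisions to find the best predicted outcome.
# The model and its coefficients are illustrative.

def expected_yield(irrigation: float) -> float:
    """Point-estimate process model: yield rises, then saturates."""
    return 10.0 * irrigation - 0.5 * irrigation ** 2

def net_outcome(irrigation: float, water_cost: float = 3.0) -> float:
    """Objective success measure: yield minus irrigation cost."""
    return expected_yield(irrigation) - water_cost * irrigation

# Feasibility constraint: at most 8 units of irrigation are available.
candidates = [i * 0.1 for i in range(81)]
best = max(candidates, key=net_outcome)
print(round(best, 1))  # 7.0
```

Note that the recommendation (irrigate at 7 units) never consults a human expert; it follows entirely from the process model and the stated objective.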

Success of this decision system depends on the predictive performance of the process model. For machine learning solutions, data requirements shift from expert consultations to process histories. Despite its independence from human recommendations, the outcome oriented rationality presents an opportunity to outperform human advice, and avoids issues with conflicting advice principles. Furthermore, a feedback loop can be established with live user data that improves the predictive performance of the process model over time.

Risk Informed Optimisation

Rainfall over the past seasons has been fluctuating, and the uncertainty of grain yield exceeds what the farmer tolerates. The farmer should instead grow yield-stable potatoes, at the cost of a lower expected yield.

Objective outcomes have two layers of uncertainty: process and circumstantial. Process uncertainties describe the variability of outcome from one realisation to the next even when all decisions and circumstances are kept constant. These are residual variances the process model is unable to explain. The use of a point estimate predictor assumes that these residuals are negligible and can be ignored, while probabilistic predictors provide some estimate of the uncertainty. Circumstantial uncertainties represent the variability of outcome due to variability in some of the circumstantial inputs. Common examples of such risk exposures are environmental variables that influence the objective outcome.

Statistical descriptions of the circumstantial variables and their pairwise correlations define the risk factor exposures of the outcome. Based on this uncertainty model, simulation strategies can be incorporated to perform outcome predictions across many conceivable scenarios, providing an estimate of the distribution of outcomes. If probabilistic process models are used, statistical cumulation strategies can combine both circumstantial and process uncertainties into a global distribution for a given set of decision inputs.
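A minimal Monte Carlo sketch of this idea, continuing the crop example: rainfall is the circumstantial variable, and simulating it turns each decision's point prediction into an outcome distribution. The rainfall distribution and yield models are invented for illustration:

```python
# Simulating a circumstantial uncertainty (rainfall) to estimate the
# outcome distribution of each decision. All parameters are illustrative.
import random
import statistics

random.seed(0)

def yield_model(crop: str, rainfall: float) -> float:
    """Process model: grains are rainfall-sensitive, potatoes are stable."""
    if crop == "grains":
        return 0.02 * rainfall
    return 8.0 + 0.002 * rainfall

def simulate(crop: str, n: int = 10_000) -> list:
    """Draw rainfall scenarios and collect the predicted outcomes."""
    return [yield_model(crop, random.gauss(600, 250)) for _ in range(n)]

for crop in ("grains", "potatoes"):
    outcomes = simulate(crop)
    print(crop, round(statistics.mean(outcomes), 1),
          round(statistics.stdev(outcomes), 1))
```

Grains end up with a higher expected yield but a far wider spread than potatoes, which is exactly the trade-off the epigraph describes.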

Now that each set of decision inputs delivers a distribution instead of a point estimate of the expected outcome, a decision criterion must be established to objectively evaluate one decision against another. Simple risk adjustment measures involve calculating the ratio of expected value to standard deviation, such as the Sharpe and Sortino ratios applied in financial portfolio optimisation. More advanced solutions derived from behavioural economics consider the user’s risk preferences, defining an optimal decision set as one that maximises the individual’s utility.
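A sketch of such a criterion: a Sharpe-like mean-to-standard-deviation ratio ranks the two crop decisions. The distribution moments below are illustrative stand-ins for simulated outcome distributions:

```python
# A risk-adjusted decision criterion over outcome distributions: rank each
# decision by its expected outcome per unit of outcome volatility.
# The moments below are illustrative.

outcomes = {
    "grains":   {"mean": 12.0, "stdev": 5.0},   # higher yield, volatile
    "potatoes": {"mean": 9.2,  "stdev": 0.5},   # lower yield, stable
}

def risk_adjusted(stats: dict) -> float:
    """Sharpe-like ratio: expected outcome over its standard deviation."""
    return stats["mean"] / stats["stdev"]

best = max(outcomes, key=lambda crop: risk_adjusted(outcomes[crop]))
print(best)  # potatoes
```

Under this criterion the stable crop wins despite its lower expected yield; swapping in a utility function tuned to the farmer's risk preferences would refine, but not change the structure of, this comparison.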

Regardless, the optimiser searches the parameter space for the set of decision arguments that optimises the criterion defined, delivering the most rational course of action considering all risk exposures.


In an increasingly connected world, decision support systems in place of human advisory services will likely become a common theme in digital transformation efforts across industries. This article presents three conceptual approaches, each delivering recommendations with varying depths of rationality. To determine which is appropriate, we need to consider the following:

  • Complexity of the advisory principles
  • Complexity of process to be optimised
  • Availability and nature of historical/operational data
  • Scalability of rule-based solutions
  • Availability and applicability of empirical process models
  • Importance of risk factor considerations

While this conceptual overview aims to establish an abstraction template for planning decision automation projects, it is not unusual in practice to adopt hybrid concepts to accommodate computational and time-to-market constraints. As such, the above list of considerations is non-exhaustive. Perhaps a decision system is necessary to prescribe which approach, or combination of approaches, is appropriate for our projects.

Future Scopes

  1. Intertemporal choices
  2. Hybrid concepts



Darrell Leong, Ph.D.

Decision scientist developing advisory engines across a variety of applications.