Looking Through the Veil: An Autonomous Approach to Remote Assessment

Darrell Leong, Ph.D.
9 min read · Jul 2, 2021

--

You’re five. It’s time for bed after a late night of horror flicks. Through the pitch darkness of your room you hear the faintest creak against the wooden floorboards. Startled, you immediately throw the sheets over your head, cowering in fear.

Was it just your imagination?
Is it just the house settling?

Your child-like imagination runs wild.

Or is the bogeyman standing at the foot of your bed right now?

You know that if you tossed the sheets aside to look, you’d find out instantly. So why don’t you? The sheets wouldn’t do much to improve your odds, so why not gain an informational advantage by finding out whether the monster is really there? Because the immense fear of confirming its presence drives your decision away from the need to know, and you’d rather settle for “probably not”.

You continue cowering under the covers, barely catching any sleep.

Coping with Obscurity

If you’re reading this, you probably emerged from the covers many years ago, yet you’re still no stranger to veiled unknowns. You don’t know whether it’s sunny or drizzling until you open the curtains, whether the line at the bank is long before you arrive to close an account, or whether the coffee at the cafe down the road is any good until you’ve tried it. Decisions have to be made even with incomplete information.

The uncertainty of an unknown prior to discovery is perhaps best described by the Schrödinger’s cat paradox, which illustrates a quantum superposition of states. The thought experiment describes a sealed box in which a cat may be either dead or alive; as long as the cat’s fate remains unknown, it exists simultaneously as a distribution over both states. The instant the box is opened, the superposition collapses into one of the two. While discussions of this illustration usually delve into the strange realm of wave-particle duality, let’s take a step in another direction and consider how this phenomenon plays out in our daily lives.

“She probably had her lunch by now.”

“It’s been snowing all morning, the driveway is probably buried.”

Whenever we attempt to make inferences about information unavailable to us, we inadvertently assign heuristic probabilities to each possibility we can imagine. By acknowledging an unknown, we do not commit certainty to a specific answer, but divide our belief among a range of conceivable states [1]. In this regard, right up to the point of discovery, the superposition of states illustrated by Schrödinger’s cat emulates the way our minds process the unknown.

Perceiving the Unseen

Apollo 13’s reentry sequence. Credit: Safely to Earth

“Houston, we’ve had a problem.”

Late one evening in April 1970, three hundred thousand kilometres from Earth, an oxygen tank exploded, blowing an entire panel off Apollo 13’s command and service module en route to the Moon. The mission abort from Houston eventually led the spacecraft back to atmospheric re-entry with the lunar module still attached. Engineers on the ground scrambled to relay the right amount of pressurisation for the astronauts to de-couple the module. If the pressure were too high, the craft would be damaged and the crew would burn up on entering the atmosphere. If the pressure were too low, the lunar module would not be pushed far enough away, risking a collision at re-entry that would incinerate the crew. Thankfully, as the crew whizzed towards solid ground at a whopping 40,000 kilometres per hour, model representations of the command and lunar modules on the ground allowed engineers to quickly obtain estimates and stress-test different pressurisations before relaying instructions in record time.

Such models range from scaled physical copies to virtual models efficient enough to simulate reality in real time. Today, the latter are commonly referred to as digital twins. They aim to represent physical objects or systems, mirroring key attributes that are not directly observable to decision makers. The technology behind digital twins has since expanded to cover large assets such as buildings, factories and even cities, and potentially even human behaviour.

While conventional adoption in commercial markets involves factories monitoring their processes with (sometimes thousands of) IoT sensors to remotely assess system health in real time, there exists a class of latent variables that are not measurable even by the most high-tech devices. For example, the emotion behind a person’s voice message, or consensus sentiment across financial markets, are important decision variables that are truly hidden. We know that some appropriate assignment of these variables exists, but we do not know how to attain it. We have only indications of the underlying processes, such as vocal sound waves to infer emotion and social media content to infer sentiment. Even if we use reasonable heuristics to make guesses, our belief about the subject’s true nature would not be one of exactness, but rather a superposition of possibilities across a distribution.

However, knowing such a distribution is already an important informational advantage. If we knew how likely the cat in Schrödinger’s experiment is to still be alive, we could make risk-informed decisions without looking into the box, such as whether to buy more cat food or a shovel. After all, assuming quantum superposition to be true, the probabilistic superposition of the cat’s fate between dead and alive represents the very nature of what’s inside the box.

But how do we formulate beliefs that align with the true distribution of such hidden states? Applying human intuition and heuristics introduces bias into our guesstimates, and reading millions of Tweets to predict market sentiment on every stock is by no means practical.

Objective Inference Through Uncertainty

To a certain extent, the concept behind digital twinning is one of biomimicry, modelled after human imagination. From our past experiences we construct neural connections that enable us to simulate approximations of reality. Whenever we make an inference about a subject that is not immediately observable, we imagine likely scenarios by running little simulations, hoping they are precise enough to draw tentative conclusions without the need to open the box.

“I’ve always seen her eating at noon. She probably had her lunch by now.”

“I’ve shovelled enough to know snow don’t melt that quick. The driveway is probably still buried.”

However, relying on these mental assessments opens the floodgates to many vulnerabilities. First, the quality of inference suffers from human limitations: we struggle to incorporate new information adequately and to update our representations of reality quickly. Second, overconfidence causes us to underestimate the uncertainty in our beliefs, resulting in risky bets made with an illusion of certitude. Third, cognitive biases disproportionately over-weight experiences that come easily to mind. Lastly, emotions further skew our beliefs from the truth, such that we over-estimate the probabilities of the outcomes we fear most. Thankfully, there are ways to automate this imaginative process objectively.

Hidden Markov Models (HMMs), like artificial neural networks, assume a structure that approximates hidden realities. But instead of modelling the complex mechanics of a physical system, an HMM directly simulates the evolution of the system’s unobservable latent attributes based on its observable emissions [2].

The structure of any HMM is fully defined by the following three sets of parameters (sketched in code after the figure below):

  • Initial probabilities: how likely each hidden state is at the start
  • Transition probabilities: how likely one hidden state is to evolve into another
  • Emission probabilities: how likely each observation is, given the hidden state

HMM inferring the weather based on user activity. Credit: Terencehonles
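
As a concrete sketch, here is how those three parameter sets might look in Python for the weather example in the figure. The numbers are purely illustrative guesses, not values fitted from data:

```python
# Hidden states and observable emissions for the weather example.
# All probabilities below are illustrative, not estimated from data.
states = ["Sunny", "Rainy"]
observations = ["walk", "shop", "clean"]

# 1. Initial probabilities of each hidden state
start_p = {"Sunny": 0.4, "Rainy": 0.6}

# 2. Transition probabilities between hidden states
trans_p = {
    "Sunny": {"Sunny": 0.6, "Rainy": 0.4},
    "Rainy": {"Sunny": 0.3, "Rainy": 0.7},
}

# 3. Emission probabilities of each observation, given the hidden state
emit_p = {
    "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1},
    "Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
}
```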

With a fully trained model, we can perform fully automated, unbiased statistical inference. Given a sequence of observations (e.g. walk, clean, walk, shop) as input, we can obtain (see the sketch after this list):

  • The most likely sequence of hidden states (e.g. sunny, sunny, rain, sunny), via the Viterbi algorithm
  • The distribution of hidden states right now (e.g. 30% rain, 70% sunny), via the forward recursion
  • The distribution of hidden states at a past point in time, via the forward-backward algorithm
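
Here is a minimal, dependency-free sketch of the first two computations, reusing the parameters defined above: the Viterbi recursion recovers the most likely path, while the forward recursion gives the normalised belief over the current state.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely sequence of hidden states for an observation sequence."""
    # V[t][s] = (probability of the best path ending in s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({
            s: max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            for s in states
        })
    # Backtrack from the most probable final state
    state = max(V[-1], key=lambda s: V[-1][s][0])
    path = [state]
    for t in range(len(obs) - 1, 0, -1):
        state = V[t][state][1]
        path.insert(0, state)
    return path

def forward(obs, states, start_p, trans_p, emit_p):
    """Filtering: distribution of the current hidden state, given all observations so far."""
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {
            s: emit_p[s][o] * sum(alpha[p] * trans_p[p][s] for p in states)
            for s in states
        }
    total = sum(alpha.values())
    return {s: a / total for s, a in alpha.items()}

seq = ["walk", "clean", "walk", "shop"]
print(viterbi(seq, states, start_p, trans_p, emit_p))  # e.g. ['Sunny', 'Rainy', 'Sunny', 'Sunny']
print(forward(seq, states, start_p, trans_p, emit_p))  # normalised belief over today's weather
```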

While the model structure is less complex than that of most deep learning models, training its parameters from data in a machine learning sense is far from trivial, because only the observable emissions are available from which to derive all of the hidden-state parameters!

The advantage of statistical models such as HMMs or Gaussian mixtures over conventional machine learning models is that there are multiple principled ways to train them. Maximum Likelihood Estimation has the considerable theoretical advantage of directly maximising the likelihood of the observed data under the model’s parameters [3]. Expectation Maximisation is a special case of Maximum Likelihood for training statistical models that rely on unobservable latent variables; applied to HMMs it is known as the Baum-Welch algorithm [5]. The algorithm adjusts the model probabilities using only the observed data, such that the resulting HMM always has a likelihood at least as great as that of the initial guess [4].
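
In practice, libraries handle the heavy lifting. The sketch below assumes the third-party hmmlearn package, which is not mentioned in the article; its discrete-emission class is MultinomialHMM in older releases and CategoricalHMM in newer ones. Baum-Welch runs inside fit():

```python
import numpy as np
from hmmlearn import hmm  # third-party: pip install hmmlearn

# Observations encoded as integers: walk=0, shop=1, clean=2
X = np.array([[0], [2], [0], [1], [0], [0], [2], [1]])

# Two hidden states (e.g. Sunny / Rainy); fit() runs Baum-Welch (EM)
model = hmm.MultinomialHMM(n_components=2, n_iter=100, tol=1e-4)
model.fit(X)

print(model.startprob_)     # learned initial-state probabilities
print(model.transmat_)      # learned transition probabilities
print(model.emissionprob_)  # learned emission probabilities
```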

Training also lends itself to an updating scheme in which the parameters are adjusted recursively as new observations arrive [6]. This offers the practical advantage of being able to deploy an initial model even in the absence of data: the starting HMM can be aligned with initial beliefs about the initial-state, transition, and emission probabilities. As real-time observations are made over the course of its operation, the HMM concurrently updates itself towards its true parameters while making inferences based on the observations collected so far.
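
As a pragmatic sketch of this idea (again assuming the hmmlearn package, and using a warm-started refit over accumulated data rather than a truly recursive update), the model can be seeded with expert beliefs and re-fitted as observations stream in:

```python
import numpy as np
from hmmlearn import hmm

# init_params='' tells fit() to keep our hand-set starting values
# instead of re-initialising them randomly.
model = hmm.MultinomialHMM(n_components=2, n_iter=20, init_params="")
model.startprob_ = np.array([0.4, 0.6])            # beliefs: Sunny, Rainy
model.transmat_ = np.array([[0.6, 0.4],
                            [0.3, 0.7]])
model.emissionprob_ = np.array([[0.6, 0.3, 0.1],   # walk, shop, clean
                                [0.1, 0.4, 0.5]])

history = np.empty((0, 1), dtype=int)
for batch in (np.array([[0], [1], [2]]), np.array([[0], [0]])):
    history = np.vstack([history, batch])    # new observations arrive
    model.fit(history)                       # nudge parameters towards the data
    print(model.predict_proba(history)[-1])  # current belief over hidden states
```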

Digital Twinning for Smallholder Farmers

The motivation for a decision support ecosystem is to deliver knowledge from experts to users in a scalable way. In farming, this is akin to dedicating agronomists and agriculturists to every farm, so that constant oversight of the crops enables quality recommendations. In reality this is seldom practical, and it is more common to deliver recommendations remotely from a centralised source of expertise. This, however, hinders oversight of the actual health states of production components (crops, soil, farm, etc.), and experts must rely on the limited information they can gather as the basis of their recommendations.

For large-scale commercial producers, such information can be tracked in real time using networks of IoT devices. Smallholder farmers, on the other hand, do not possess the resources required to support IoT-style monitoring. Furthermore, these farmers may experience vastly different crop performance for a variety of reasons: varying cultivation techniques, levels of care, luck, and other externalities. This lack of visibility into real-time crop conditions complicates the delivery of personalised advice, as well as of early pest and disease warnings that could prevent a loss of livelihood.

With hidden-state inference models such as HMMs, it is possible to rationally formulate beliefs about crop attributes by aggregating app-user interactions, observations and weather data that correlate with the hidden states. The approach is computationally efficient and repeatable, and can be extended to every attribute that needs to be monitored without developing complex physical models. Collectively, the ensemble of inferred attributes completes a holistic virtual clone of each farmer’s crop, enabling advisory systems (human and automated) to provide personalised, relevant decision support to individual farmers.
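
To make this concrete, here is a hypothetical sketch for a single crop attribute, reusing the viterbi and forward functions from earlier. Every state name, report category and probability below is invented for illustration; in practice they would come from agronomists and historical data.

```python
# Hypothetical crop-health twin; all names and numbers are illustrative.
crop_states = ["healthy", "pest_stressed", "diseased"]

crop_start_p = {"healthy": 0.8, "pest_stressed": 0.15, "diseased": 0.05}

crop_trans_p = {  # week-to-week evolution of the hidden crop condition
    "healthy":       {"healthy": 0.85, "pest_stressed": 0.10, "diseased": 0.05},
    "pest_stressed": {"healthy": 0.30, "pest_stressed": 0.55, "diseased": 0.15},
    "diseased":      {"healthy": 0.10, "pest_stressed": 0.20, "diseased": 0.70},
}

crop_emit_p = {  # what a farmer might report through the app each week
    "healthy":       {"no_issue": 0.70, "spots_on_leaves": 0.20, "wilting": 0.10},
    "pest_stressed": {"no_issue": 0.30, "spots_on_leaves": 0.50, "wilting": 0.20},
    "diseased":      {"no_issue": 0.10, "spots_on_leaves": 0.40, "wilting": 0.50},
}

reports = ["no_issue", "spots_on_leaves", "spots_on_leaves", "wilting"]
print(viterbi(reports, crop_states, crop_start_p, crop_trans_p, crop_emit_p))  # likely trajectory
print(forward(reports, crop_states, crop_start_p, crop_trans_p, crop_emit_p))  # current belief
```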

Unveiling the Big Picture

An ecosystem for automated remote sensing and advisory

Formulating a robust decision-making framework around indirectly observed assets demands a comprehensive ecosystem. At its base, reliable data feeds must be established to obtain reasonable indications of the assets’ latent attributes. This requires monitoring resources, such as networks of IoT sensors, or adequate levels of user engagement for consistent and reliable streams of interaction. Next, an inference system is needed to ensemble the multiple sources of data and interpret the important latent attributes in real time. This system should be fully automated so that it delivers an unbiased representation of the asset. The breadth of information inferred about the asset should also be wide enough to support meaningful decision-making from the digital twin. At the very top, the decision support systems should be informed by industry-leading expertise and designed in a scalable fashion to enable widespread delivery.

While informational efficiency in the monitoring layer is exceptionally challenging, we can still rationally incorporate all the data we have on hand to formulate our beliefs. Perfect representations of absolute exactness might never be possible. But with the right tools we can perhaps be certain enough about what lies beyond the veil, never to lose another wink of sleep again.

Resources

  1. Decision Making with Incomplete Information
  2. Hidden Markov Models
  3. Maximum Likelihood Estimation
  4. Expectation Maximisation
  5. Baum-Welch Algorithm
  6. Bayesian Inference

--

Darrell Leong, Ph.D.

Decision scientist developing advisory engines across a variety of applications.