
Augmenting Financial Intelligence & Explainable AI

Semantic Brain was founded with the intention of augmenting human intelligence by harnessing the power of AI and Big Data. Our primary goal is to increase human productivity on intelligent tasks by an order of magnitude, using enhanced machine-to-human feedback loops. This stands in contrast to many current state-of-the-art AI solutions, which provide little or no explainability / transparency and thereby limit the ability to create effective AI-to-human feedback loops.


We recently achieved 100% explainability in our Price Change Direction prediction, and were able to demonstrate that explainability in a Human + AI system can deliver Super Intelligence. The needs of all human stakeholders / personas were met. Key evidence supporting these findings is presented below.


Explainability

In this context, explainability refers to users being able to understand how inputs (i.e., features) relate to outputs (i.e., predictions). Features can be:

  1. Primary: Obtained directly from stock markets (e.g., price, volume) or financial statements (e.g. revenue, earnings).

  2. Derived: Calculated from one or more primary features (e.g., Relative Strength Index, Moving Average Convergence Divergence, Volume Weighted Moving Average).

  3. Composite: Combinations of more than one feature (primary and/or derived).
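The distinction between primary and derived features can be sketched in a few lines of Python. The functions, window lengths, and price series below are illustrative assumptions for this post, not Semantic Brain's actual feature pipeline:

```python
# Primary feature: raw closing prices from the market.
# Derived features: values calculated from one or more primary features.
# All names and data here are illustrative placeholders.

def simple_moving_average(prices, window):
    """Derived feature: mean of the last `window` primary prices."""
    return sum(prices[-window:]) / window

def relative_strength_index(prices, period=14):
    """Derived feature: simple RSI over the trailing `period` price changes."""
    changes = [b - a for a, b in zip(prices[:-1], prices[1:])]
    recent = changes[-period:]
    gains = sum(c for c in recent if c > 0)
    losses = sum(-c for c in recent if c < 0)
    if losses == 0:
        return 100.0
    rs = gains / losses
    return 100.0 - 100.0 / (1.0 + rs)

prices = [100, 101, 103, 102, 105, 107, 106, 108, 110, 109,
          111, 113, 112, 114, 115]
sma = simple_moving_average(prices, window=5)   # 113.0
rsi = relative_strength_index(prices, period=14)
```

A composite feature would then combine several of these values (for example, price relative to its moving average together with RSI) into a single input.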

The relevant stakeholders were identified as:

  1. Financial AI Engineers: People who create and/or update the predictive AI models. These individuals also engineer features.

  2. Investors / Traders: People who use the predictions to make investment / trading decisions.

Note: Feature names have been anonymized and some labels have been obfuscated to protect intellectual property. Colour coding will be updated to match market colours in the future.


Financial AI Engineers

Engineers working on predictive analytics can determine which features influence price the most (as they relate to the overall model). By observing the chart below (Fig 1), engineers can easily determine that f1, f3, and f2 have the greatest impact, while f8, f7, and f11 have the least. An engineer may be able to improve the model by removing f8, f7, and f11 and engineering features similar to f1, f3, and f2.

Fig 1: Feature Importance
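The pruning step described above can be sketched as a simple ranking over importance scores. The scores below are illustrative placeholders, not the values behind Fig 1:

```python
# Hypothetical importance scores for the anonymized features in Fig 1.
# Values are made up for illustration; only the ranking logic matters.
importances = {"f1": 0.21, "f3": 0.18, "f2": 0.15, "f4": 0.10,
               "f5": 0.09, "f6": 0.08, "f9": 0.07, "f10": 0.05,
               "f11": 0.03, "f7": 0.02, "f8": 0.02}

# Rank features from most to least important.
ranked = sorted(importances, key=importances.get, reverse=True)

keep = ranked[:-3]     # retrain on the stronger features
dropped = ranked[-3:]  # candidates for removal: f11, f7, f8
```

In practice the engineer would retrain the model on `keep` and compare held-out accuracy before committing to the pruning.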

Financial AI Engineers and Investors / Traders

Fig 2 below is an input / feature distribution chart, which identifies how features individually impact predictability (as opposed to Fig 1, which shows impact on the model as a whole). Investors / traders may also use this chart to create their own alternative analysis templates (e.g., identify features that are not covered and how they may impact the direction of price movement).


Fig 2: Feature Distribution Chart
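The per-feature view in Fig 2 can be approximated with a univariate check: a feature whose "up" and "down" samples barely overlap is individually informative, while one whose distributions coincide is not. The data and feature names below are synthetic placeholders, not the distributions shown in Fig 2:

```python
# Univariate separation check: how well does a single feature
# distinguish upward from downward price moves on its own?
# All values and feature names are synthetic, for illustration only.

def mean(xs):
    return sum(xs) / len(xs)

def class_separation(up_values, down_values):
    """Gap between class means, scaled by the pooled standard deviation."""
    gap = abs(mean(up_values) - mean(down_values))
    pooled = up_values + down_values
    m = mean(pooled)
    variance = sum((x - m) ** 2 for x in pooled) / len(pooled)
    return gap / (variance ** 0.5) if variance else 0.0

# "f1" separates the classes well; "f8" barely at all (toy data).
sep_f1 = class_separation([0.9, 1.0, 1.1, 1.2], [0.1, 0.2, 0.3, 0.2])
sep_f8 = class_separation([0.5, 0.6, 0.4, 0.5], [0.5, 0.4, 0.6, 0.5])
```

A high separation score flags a feature as individually predictive; a near-zero score suggests its value, if any, comes only in combination with other features.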

Investors / Traders

Because all features suggest upward price movement, investors can have high confidence in this prediction.


Fig 3: Confident price increase prediction

Because all features suggest downward price movement, investors can have high confidence that the price will move down.

Fig 4: Confident price decrease prediction

Although the price is generally expected to move up, some features suggest a downward move. The investor should therefore have less confidence in this prediction, or alternatively perform additional due diligence.

Fig 5: Low confidence price increase prediction
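The intuition behind Figs 3-5 can be sketched as an agreement score: confidence is the fraction of feature signals that point in the predicted direction. The signal encoding (+1 up, -1 down) and the sample counts below are illustrative assumptions, not Semantic Brain's actual confidence measure:

```python
# Agreement-based confidence: what fraction of per-feature signals
# point the same way as the model's predicted direction?
# Signals: +1 = feature suggests up, -1 = feature suggests down.

def agreement_confidence(signals, predicted_direction):
    """Fraction of feature signals matching the predicted direction."""
    matches = sum(1 for s in signals if s == predicted_direction)
    return matches / len(signals)

all_up = [+1] * 11              # Fig 3 scenario: every feature points up
mixed = [+1] * 8 + [-1] * 3     # Fig 5 scenario: three features dissent

conf_high = agreement_confidence(all_up, +1)   # 1.0 -> high confidence
conf_low = agreement_confidence(mixed, +1)     # ~0.73 -> do more due diligence
```

An investor could set a personal threshold on this score, acting directly on high-agreement predictions and investigating low-agreement ones further.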

Conclusion

Explainability creates an effective feedback loop between AI and humans, which together can deliver Super Intelligence.

  • Financial AI Engineers can continuously improve predictive analytics models.

  • Investors / Traders can apply (overlay) their judgement and analysis to make even better trading decisions.

Semantic Brain has been able to increase Price Change Direction prediction Accuracy and Precision by more than 10% using explainability. We are confident that explainability will help us further increase Accuracy and Precision in the future (for this and other models).

