We hope this investor update finds you well. We would like to share some exciting news and developments about our trading strategies and the encouraging results we have achieved during the testing of a completely new algorithm.
As you may have noticed, the strategy offered through the Darwinex and Capriva services has underperformed recently relative to its historical track record. We understand that you have entrusted your capital to us, and we take that responsibility very seriously. We continually monitor the financial markets and evaluate our strategy so that performance meets both your expectations and our own standards.
One reason our strategy has struggled recently is the change in the financial markets over the last few years. Markets have become more dynamic and unpredictable than ever before, due to factors such as:
- The increased volatility and uncertainty caused by the Covid-19 pandemic in 2020/21, which disrupted the global economy and affected many sectors and industries.
- The subsequent shifts in monetary and fiscal policies by central banks and governments around the world, which continue to influence interest rates, inflation, and exchange rates, all of which affect the movements of inter-connected asset groups.
- The rapid development and adoption of new technologies, such as artificial intelligence, blockchain, and cryptocurrencies, which have created new opportunities but also new challenges for investors and traders as they reshape market dynamics.
These changes have posed new difficulties for traditional algorithmic trading methods that rely on fixed rules and historical data, since such methods cope poorly with shifting market conditions and struggle to capture emerging patterns and trends. These are the same methods we have used historically, and we therefore realised we needed a strategy better able to adapt to changing market conditions and to learn from its own experience. Such a strategy would serve us and our investors well, being able to cope with unknown factors at any future point in time.
That is why in 2022 we decided to invest heavily in developing a trading algorithm using artificial intelligence (AI). AI is a branch of computer science that aims to create machines or systems that can perform tasks that usually require human intelligence, such as learning, reasoning, and decision-making. AI has been making remarkable progress in recent years, thanks to advances in computing power, data availability, and algorithm design.
Initially, we focussed our efforts on one of the most widely researched areas of AI, 'supervised learning'. Here we used neural networks to predict imminent market movements over a given time horizon. However, those predictions still relied on fixed, hand-written algorithmic rules to decide how to turn them into trading decisions. Designing such rules turned out to be deceptively difficult, and it is something that many scholars and industry researchers have likewise had only limited success with in financial market contexts.
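To make that limitation concrete, here is a minimal, purely hypothetical sketch of the supervised-learning setup: a model produces a probability of an upward move, and a fixed, hand-written rule converts it into a trade. The 0.6/0.4 thresholds are invented for illustration and are not from our actual system.

```python
def trade_decision(predicted_up_probability: float) -> str:
    # A fixed rule layered on top of the model's prediction. The rule itself
    # never adapts or learns -- which is the limitation described above.
    if predicted_up_probability > 0.6:
        return "buy"
    if predicted_up_probability < 0.4:
        return "sell"
    return "hold"

print(trade_decision(0.75))  # buy
print(trade_decision(0.50))  # hold
```

However good the prediction, the quality of the resulting trades is capped by whatever hand-chosen rule sits on top of it.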
For this reason, we switched our focus to one of the most promising and upcoming areas of AI called reinforcement learning (RL). This is a type of machine learning that enables an agent to learn from its actions and feedback from the environment. The learning process behind RL is inspired by how humans and animals learn from trial and error, by rewarding good behaviors and punishing bad ones. RL has been successfully applied to various domains, such as games, robotics, self-driving cars, healthcare, and finance.
The difference between the supervised learning branch of AI and RL is important to understand. RL does not merely make predictions: it learns which actions to take, and when, to maximise reward. In short, in our context, it 'learns to trade'.
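To make that trial-and-error loop concrete, the sketch below shows an RL agent learning in a toy market environment: it acts, receives a reward as feedback, and updates its value estimates. Everything here, the biased coin-flip market, the two actions, and the reward rule, is a hypothetical illustration of the learning loop, not our production algorithm.

```python
import random

random.seed(0)

# Toy market: the price rises (+1) with 75% probability, else falls (-1).
# A deliberately simple assumption for illustration, not a market model.
def market_step():
    return 1 if random.random() < 0.75 else -1

q = {"long": 0.0, "short": 0.0}   # the agent's estimated value of each action
alpha, epsilon = 0.05, 0.2        # learning rate, exploration rate

for _ in range(2000):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(list(q))
    else:
        action = max(q, key=q.get)
    move = market_step()
    # Reward is the profit of the position: positive when the direction is right.
    reward = move if action == "long" else -move
    # Feedback from the environment nudges the value estimate toward the reward.
    q[action] += alpha * (reward - q[action])

# The agent has learned, from experience alone, which action pays off here.
print(q)
```

No prediction model or hand-written trading rule appears anywhere: the mapping from situation to action emerges purely from rewarded experience.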
One of the most famous examples of RL is DeepMind’s AlphaGo, which is a computer program that learned how to master the ancient board game Go by playing millions of games against itself. In 2016, AlphaGo defeated Lee Sedol, one of the world’s best Go players, in a historic match that demonstrated the power and potential of RL. AlphaGo was able to discover new strategies and moves that no human player had ever seen or used before, and to outsmart its human opponent with its creativity and intuition.
Another celebrated DeepMind system is AlphaFold, which, while built on deep learning rather than RL itself, learned to predict the three-dimensional structure of proteins from data on previously solved structures. In 2020, AlphaFold achieved a breakthrough on one of the most challenging problems in biology, with potentially huge implications for drug discovery and disease treatment. It accurately predicted the shapes of proteins essential for life, surpassing the previous performance of human experts and other methods.
In finance, RL has also been gaining popularity and attention as a way to design adaptive and intelligent trading algorithms. However, due to the proprietary and commercially sensitive nature of this sector, research and development is far less often made public. That said, several academic studies have shown RL to be significantly more effective than traditional algorithmic trading techniques.
One of the main challenges of RL is how to deal with complex and high-dimensional state spaces, which are the sets of possible situations that an agent can encounter. For example, in financial markets, an agent needs to consider many factors, such as prices, volumes, indicators, etc., when making trading decisions. To cope with this challenge, we decided to use deep Q-learning (DQL), which is a combination of RL and deep neural networks (DNNs).
DNNs are a type of artificial neural network (ANN) consisting of multiple layers of interconnected nodes that can learn complex patterns from data. DNNs have been widely used for tasks such as image recognition, natural language processing, and speech synthesis. DQL uses DNNs to approximate the Q-function, which estimates the expected future reward for each action given a state. By using DNNs, DQL can handle large state spaces and learn nonlinear relationships between states and actions.
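The rule at the heart of Q-learning is the update Q(s, a) ← Q(s, a) + α [r + γ max over a' of Q(s', a') − Q(s, a)]. The sketch below applies that update with a simple lookup table on an invented two-state "trend" environment; in DQL proper, a DNN replaces the table so the same update scales to large, high-dimensional state spaces. All states, transitions, and rewards here are hypothetical illustrations, not our trading environment.

```python
import random
from collections import defaultdict

random.seed(1)

# Invented two-state environment: the market is either trending up or down,
# and the trend persists with 80% probability. Purely illustrative.
ACTIONS = ["long", "short"]

def step(state, action):
    other = "trend_down" if state == "trend_up" else "trend_up"
    next_state = state if random.random() < 0.8 else other
    # Reward: +1 for trading with the trend, -1 for trading against it.
    with_trend = (state == "trend_up") == (action == "long")
    return next_state, (1.0 if with_trend else -1.0)

# Tabular Q-function. DQL replaces this table with a deep neural network
# approximating Q(state, action) over high-dimensional market states.
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration

state = "trend_up"
for _ in range(5000):
    action = (random.choice(ACTIONS) if random.random() < epsilon
              else max(ACTIONS, key=lambda a: Q[(state, a)]))
    next_state, reward = step(state, action)
    # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

print({k: round(v, 2) for k, v in Q.items()})
```

The discount factor gamma is what makes the agent weigh future reward, not just the immediate profit of the next trade.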
We have been developing our DQL trading algorithm for many months now, using Keras and TensorFlow, popular frameworks for building deep learning models. We have tested the algorithm predominantly on the FX markets, which is where we have focussed most of our efforts in the past. The out-of-sample results obtained in testing have impressed us: they show significant improvements over our previous strategy and also outperform other market benchmarks.
Our DQL algorithm has shown the following advantages over our previous strategy:
It is more flexible and adaptable to changing market conditions, as it can learn from its own experience and feedback from the environment.
It is more efficient and effective at exploiting market opportunities, as it optimises its actions to maximise reward.
It is more robust and resilient to market noise and anomalies, as it can filter out irrelevant information and focus on more significant factors.
Although we have conducted extensive backtesting and validation on historical data, it has taken us longer than we would have liked to start trading live with our DQL algorithm: the nuances of RL introduced complexities that required significant fine-tuning and optimisation of parameters and hyperparameters. We also want to finalise a risk management module and performance evaluation tools to monitor and control the algorithm before releasing it into production.
This being said, we aim to go live with our DQL algorithm in the Dec 2023/Jan 2024 timeframe, and we are very excited and optimistic about the prospects of the strategy. We believe that our DQL algorithm will be able to adapt to changing market conditions in the future and to learn from its own experience, thus enhancing trading performance and generating much more consistent returns for our clients.
We are proud to be at what we consider the forefront of actionable AI innovation in finance, and we felt it was time to share with you the principles behind our new AI-powered strategy.
None of the products, services, or information provided by this website constitute financial investment advice and are not advice to invest in any financial products or derivatives.