Course on Financial Technologies and Central Banking

Mexico City, Mexico, November 12–14, 2019

 

Introduction
The first edition of the CEMLA Course on Financial Technologies and Central Banking was hosted in Mexico City from November 12 to 14, 2019. The Course had two main objectives. The first was to introduce attendees to the new technologies currently being developed and used to address central banking problems; the Course covered three main topics: Machine Learning (ML), Complex Networks Science and Distributed Ledger Technology (DLT). The second objective was to serve as a forum to launch an Innovation Hub at CEMLA, with the academic support of University College London (UCL). In this vein, the Course presented use cases developed by regional central banks that rely on new technologies to address policy and operational issues, including the measurement of exposure to systemic risk, the characterization of the interbank market, price rigidity, anomalous payment detection, and RTGS design using DLT, among others.
Taken together, the Course's sessions helped to showcase how new technologies can help central banks boost and extend their capacities as monitoring and regulatory agents.

Machine Learning in Finance I
This session illustrated the types of problems that ML can address, as summarized below:
Supervised learning aims to discover a relationship between an output variable and input variables from a set of examples, while unsupervised learning seeks to uncover the underlying structure of the data. The session focused on supervised learning, starting with the typical use scenario, how to train and test models, and well-established techniques such as logistic regression, decision trees, random forests and neural networks, among others. The session also discussed the bias-variance trade-off: a model needs to be flexible enough to capture data patterns while keeping both training and testing errors as low as possible.
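As a minimal illustration of this workflow, the following sketch (using scikit-learn on synthetic data; the dataset and model are illustrative choices, not those used in the course) fits a classifier on a training split and compares training and testing errors:

```python
# Minimal supervised-learning sketch: fit a classifier on a training split and
# compare training vs. testing error, the quantities behind the bias-variance trade-off.
# Synthetic data and the model choice are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

train_error = 1 - model.score(X_train, y_train)   # low training / high testing error signals overfitting
test_error = 1 - model.score(X_test, y_test)
print(f"training error: {train_error:.3f}, testing error: {test_error:.3f}")
```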

Machine Learning in Finance II
The second session focused on unsupervised techniques for dimensionality reduction and clustering. The first technique analyzed was Principal Component Analysis (PCA), which aims to capture the true correlation signal in a data matrix by using the K largest eigenvectors (principal components); in this way, PCA helps to identify the most important information (variance). The second technique analyzed was K-Means clustering, which, by measuring the distance between observations across all features (commonly the Euclidean distance), forms K clusters, maximizing the distance between instances of different clusters while minimizing the distance among the members of each cluster.
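A short sketch of these two techniques, on synthetic data and with illustrative parameter choices (two components, three clusters), might look as follows:

```python
# Sketch of the two unsupervised techniques discussed: project the data onto its
# leading principal components, then group observations with K-Means.
# The data and parameter choices (2 components, 3 clusters) are illustrative.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                      # stand-in for a feature matrix

X_std = StandardScaler().fit_transform(X)           # PCA is sensitive to scale
pca = PCA(n_components=2)
scores = pca.fit_transform(X_std)                   # coordinates on the 2 leading components
print("variance explained:", pca.explained_variance_ratio_)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scores)
print("cluster sizes:", np.bincount(kmeans.labels_))
```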
Also in this session, the Banco de la República (Central Bank of Colombia) presented its use case, Identifying anomalous behavior of large-value payment system participants with artificial neural networks. Using 113 features from financial institutions' balance sheets, a feed-forward neural network was trained, composed of two hidden layers of 60 neurons each, with sigmoid and softmax activation functions respectively. After training and testing, the model showed a misclassification error for participants' anomalous behavior of 11% on a total of approximately 23 thousand samples.
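By way of illustration, the sketch below sets up a network following one possible reading of the described architecture (two 60-neuron hidden layers followed by a softmax output, with 113 input features); the data, number of classes and training settings are assumptions:

```python
# Sketch of a feed-forward classifier resembling the described architecture:
# 113 input features and two 60-neuron hidden layers, with a softmax output
# producing class probabilities. Data, number of classes and training settings
# are assumptions for illustration only.
import numpy as np
from tensorflow import keras

n_features, n_classes = 113, 2                       # 113 balance-sheet features; 2 classes assumed
X = np.random.normal(size=(1000, n_features)).astype("float32")
y = np.random.randint(0, n_classes, size=1000)

model = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(60, activation="sigmoid"),
    keras.layers.Dense(60, activation="sigmoid"),
    keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.3, verbose=0)
print("misclassification rate:", 1 - model.evaluate(X, y, verbose=0)[1])
```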

Machine Learning in Finance III
The third session was devoted to Artificial Neural Networks (ANN), a technique that seeks to reproduce the learning process of real neurons. ANNs have shown accurate predictive power compared with other ML techniques and have been applied in fields such as biology, physics, neuroscience and finance, among others.
The session started with the Perceptron, a single-layer neural network that learns a non-linear function used to predict an output variable from different input variables. The session continued with a more general case, the multilayer perceptron, whose architecture, as the name indicates, is composed of one or more hidden layers in addition to the input and output layers, with different activation functions (hyperbolic tangent (tanh), sigmoid, softmax, rectified linear unit (ReLU), etc.). Finally, a more specific type of neural network was introduced: the Autoencoder. This unsupervised neural network aims to learn the most important features of the data and has applications in image compression and generation, dimensionality reduction, anomaly detection, etc. It works by compressing a dataset into a lower-dimensional space (with the encoder function) and then reconstructing it back into the original space (with the decoder function), using as performance metric the reconstruction error, which measures how well the network reproduces the input layer.
Continuing with the session, the Banco Central del Ecuador presented its use case, Anomaly Detection in the Ecuador Payments System. Using transaction data from the Ecuadorian payments system, an autoencoder was trained to detect anomalous payments, under the hypothesis that most transactions are normal, so that, after training, an atypical payment fed to the model will yield a higher-than-average reconstruction error. Two different autoencoders were trained, both with one hidden layer: the first used a tanh function for the encoder and a ReLU function for the decoder, while the second used a ReLU function for both. The first model outperformed the second, possibly due to the higher complexity of the hyperbolic tangent function.
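A minimal sketch of this reconstruction-error approach, assuming scaled payment features, a small hidden layer and an illustrative flagging threshold, might look as follows:

```python
# Sketch of autoencoder-based anomaly detection as described: compress payments
# into a lower-dimensional code (tanh encoder), reconstruct them (ReLU decoder),
# and flag records whose reconstruction error is unusually high.
# Data, layer sizes and the flagging threshold are assumptions for illustration.
import numpy as np
from tensorflow import keras

n_features = 10
X = np.random.rand(5000, n_features).astype("float32")  # stand-in for scaled payment features

autoencoder = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(4, activation="tanh"),            # encoder: compress to 4 dimensions
    keras.layers.Dense(n_features, activation="relu"),   # decoder: reconstruct the original space
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=10, batch_size=64, verbose=0)   # train to reproduce the input

recon_error = np.mean((X - autoencoder.predict(X, verbose=0)) ** 2, axis=1)
threshold = np.percentile(recon_error, 99)            # flag the 1% hardest-to-reconstruct payments
anomalies = np.where(recon_error > threshold)[0]
print(f"{len(anomalies)} payments flagged as potentially anomalous")
```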
Next, the Banco Central de Reserva de El Salvador presented its use case, Oversight of Payment and Settlement Systems. They applied anomaly detection to data from the RTGS system in order to identify unusual payment behavior, combining two unsupervised techniques: Principal Component Analysis (PCA) and K-Means. Under the hypothesis that the data features capture anomalous behavior, they first applied PCA to extract two principal components and then trained a two-cluster K-Means model, which placed 96% of the payments in one cluster and 4% in the other. The results suggested that the payments in the second cluster were possibly anomalous.

Multivariate Statistics
In the fourth session, a review of multivariate statistics was presented.
The session started with PCA, whose main goal is to make sense of a large multivariate system by mapping the original variables onto new "synthetic" variables, called principal components. PCA works through a projection based on pairwise relationships between variables, and the resulting new variables are all uncorrelated. Principal components can be ranked in order of importance by the fraction of variance of the original data that they explain, leading to a dimensionality reduction.
The session then continued with a review of Random Matrix Theory, whose goal is to identify the informative properties of a matrix and which can help to assess whether a principal component is noisy or informative.
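As an illustration of this idea, the sketch below compares the eigenvalues of an empirical correlation matrix with the Marchenko-Pastur upper bound expected under pure noise (the data here are random, so essentially all eigenvalues should fall below the bound):

```python
# Sketch of an RMT check on principal components: eigenvalues of the empirical
# correlation matrix above the Marchenko-Pastur upper edge lambda_+ = (1 + sqrt(N/T))^2
# are candidates for informative components; the rest are compatible with noise.
# The data here are pure noise, so essentially all eigenvalues should sit below the bound.
import numpy as np

T, N = 1000, 50                                   # T observations of N variables
X = np.random.default_rng(0).normal(size=(T, N))
C = np.corrcoef(X, rowvar=False)                  # N x N empirical correlation matrix

eigvals = np.linalg.eigvalsh(C)
lambda_plus = (1 + np.sqrt(N / T)) ** 2           # Marchenko-Pastur upper edge (unit variance)
informative = eigvals[eigvals > lambda_plus]
print(f"MP upper bound: {lambda_plus:.2f}, eigenvalues above it: {len(informative)}")
```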
To conclude the session, the Banco Central del Uruguay presented its use case, Price Rigidity with microeconomic data. Using a dataset of retail prices in sectors such as beverages, alcoholic beverages, food, tobacco and personal care, together with macro data (the CPI index and employment and unemployment rates), they studied price flexibility. After performing PCA, they found a high correlation between employment and price changes, mainly in the food and alcoholic beverages sectors.

Complex Networks
This session focused on one of the big topics of the course, Complex Networks Science. It started with some real-life examples of networks, such as airport traffic networks, social networks, food webs and payment systems.
A number of basic concepts used to characterize networks were studied, including size, density, degree and its distribution, assortativity, clustering, betweenness centrality and eigenvector centrality. Next, a particular class of networks was examined, Random Graphs, which serve as benchmarks that allow the functionality of the system being modeled to be understood in a controlled setting, and which help tackle issues such as poorly detailed information, the isolation of properties of interest and closeness to real-life networks.
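A short sketch of how these descriptors can be computed on a random-graph benchmark (sizes and parameters are illustrative) is given below:

```python
# Sketch of the basic network descriptors discussed, computed with networkx on an
# Erdős–Rényi random graph used as a controlled benchmark (sizes are illustrative).
import networkx as nx

G = nx.erdos_renyi_graph(n=200, p=0.05, seed=0)   # random-graph benchmark

print("size:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("density:", nx.density(G))
print("mean degree:", sum(dict(G.degree()).values()) / G.number_of_nodes())
print("assortativity:", nx.degree_assortativity_coefficient(G))
print("average clustering:", nx.average_clustering(G))

betweenness = nx.betweenness_centrality(G)
eigenvector = nx.eigenvector_centrality(G, max_iter=1000)
print("most central node (betweenness):", max(betweenness, key=betweenness.get))
print("most central node (eigenvector):", max(eigenvector, key=eigenvector.get))
```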
To continue with the session, the Banco Central de Reserva del Perú presented its use case, Testing and characterization of the interbank market with network models. They presented an overview of the Peruvian interbank credit market, where unsecured borrowing represents 70% of the market (usually with overnight maturity), followed by secured borrowing and FX swaps; they also mentioned that an important source of funding for banks is the issuance of fixed income instruments in local and international capital markets. After performing network analysis, they found that some institutions concentrate lending whereas others concentrate borrowing, with banks on average forming triangles with half of their linkages. The Giant Strongly Connected Component (GSCC) consisted on average of 3 or 4 nodes (representing the biggest banks). The core size fluctuated between 2 and 3, although the institutions in the core changed constantly. Finally, the study found a clear correlation between size and contagion: if the biggest banks were removed from the network, the interconnections would be much weaker.
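As an illustration of the GSCC notion used in the study, the following sketch extracts the largest strongly connected component of a toy directed exposure network (the exposures are hypothetical):

```python
# Sketch of how the giant strongly connected component (GSCC) of a directed
# interbank network can be extracted; the toy exposures below are hypothetical.
import networkx as nx

# edges point from lender to borrower; weights are hypothetical exposures
exposures = [("A", "B", 10), ("B", "C", 5), ("C", "A", 7),
             ("C", "D", 3), ("D", "E", 2)]
G = nx.DiGraph()
G.add_weighted_edges_from(exposures)

gscc = max(nx.strongly_connected_components(G), key=len)
print("GSCC:", gscc)                               # {'A', 'B', 'C'} in this toy example

# removing the largest lender weakens interconnections, as found in the Peruvian study
G_removed = G.copy()
G_removed.remove_node("A")
print("edges after removing A:", G_removed.number_of_edges())
```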

Blockchain Technologies I
The main purpose of the sixth session was to introduce blockchain technologies.
It started with a review of Bitcoin, a cryptoasset born from the idea of a currency that could be exchanged freely without the need for a financial institution. Bitcoin, enabled by blockchain, creates an environment where all transactions are kept in a shared, single but highly replicated bookkeeping source called the ledger. Every participant (node) has a replica of the ledger, meaning that all nodes are equal and synchronize the ledger periodically by verifying and validating blocks of transactions. In this process, new coins are produced and protected by cryptographic keys, and only the owner of the private key can spend a coin; the validity of a block is established by the next block attaching to it with a cryptographic seal (hash chain). The blockchain is the chronological list of all blocks of transactions from the genesis block onward. In order to work correctly, blockchain systems need consensus, meaning that participants must agree on the "true content" of the blockchain. Bitcoin proposes a "sociological" solution in which the truth is established by majority vote; from another perspective, this majority is expressed in terms of computing power, with one computer representing one vote.
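A minimal sketch of the hash-chain mechanism (leaving aside consensus, mining and digital signatures) is shown below:

```python
# Minimal sketch of the hash-chain idea behind a blockchain: each block stores the
# hash of the previous block, so altering any earlier block breaks the chain.
# Simplified illustration only (no consensus, mining or signatures).
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "transactions": [], "prev_hash": "0" * 64}]    # genesis block
for i, txs in enumerate([["alice->bob:5"], ["bob->carol:2"]], start=1):
    chain.append({"index": i, "transactions": txs,
                  "prev_hash": block_hash(chain[-1])})                # cryptographic seal

# verification: every block must reference the hash of its predecessor
valid = all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))
print("chain valid:", valid)
```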
Another kind of protocol is the smart contract, whose purpose is to verify and enforce the terms of a contract between two parties, thereby reducing default risk. From the regulatory side, blockchain technology combined with smart contracts and rules can produce a decentralized autonomous organization (DAO) that can operate autonomously over the blockchain.

Blockchain Technologies II
This session was devoted to blockchain technologies and their use for regulation. It was shown that blockchain can provide access to auditable data, the creation of time-stamped, immutable records, the creation of a unique source of truth, and the constitution of a transparent, interoperable environment that could also automatically execute contracts, enabling automatic regulation, nowcasting and real-time regulation, among other possibilities. From the policy perspective, it was mentioned that blockchain could help to increase efficiency and reduce costs, reduce systemic, operational and counterparty risk, and protect financial stability.
The session ended with the presentation of the Banco Central de Chile use case, Distributed Ledger Technology and an RTGS conceptual design. The project was motivated by the goal of increasing the number of payments settled electronically in central bank money and reducing the inherent risks of the alternative means of payment currently in use. The case was based on deconstructing each RTGS system component and exploring different architecture alternatives using blockchain. They defined the following parameters to assess the architectures and select the one that best enhances competition in access to the RTGS: number of participants, concentration ratio, cost of entering the system, cost per transaction, number of intermediaries, transparency, total volume and number of transactions, new services, and resilience. After the assessment, they found that privacy at the transactional level is achievable and could help to improve interconnectivity levels (drawing on their past experience with digital bond issuance). They also found that a structure with a wallet per user and per third party would involve more trade-offs when each party interacts with the rest. To conclude, it was mentioned that interoperability is a key component for enhancing innovation in the payment system.

Systemic Risk I
The purpose of this session was to analyze systemic risk and the different channels of contagion from a complex networks perspective. It was mentioned that, in direct contagion, cascade models propagate losses only after defaults, yet stress can propagate before a borrower defaults due to the deterioration of its credit quality. It was also shown how banks can be ranked using network metrics, revealing that the most impactful banks are also the most vulnerable, and that small banks can be among them. Another kind of contagion propagates through overlapping portfolios: stress can spread from one bank to others holding common assets through prices; for instance, if a bank liquidates its portfolio in a fire sale, prices will fall and investors with similar portfolios will suffer mark-to-market losses.
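To make the direct-contagion channel concrete, the following sketch runs a simple default cascade on a hypothetical exposure matrix (balance-sheet figures are purely illustrative):

```python
# Sketch of a simple direct-contagion cascade on an interbank exposure network:
# when a bank defaults, its creditors write off their exposures; banks whose equity
# is exhausted default in turn. Balance-sheet figures are purely illustrative.
import numpy as np

# exposure[i, j] = amount bank i has lent to bank j (lost if j defaults)
exposure = np.array([[0, 4, 2],
                     [1, 0, 3],
                     [2, 1, 0]], dtype=float)
equity = np.array([5.0, 3.0, 2.0])
defaulted = {2}                                    # initial shock: bank 2 fails

changed = True
while changed:
    changed = False
    for i in range(len(equity)):
        if i in defaulted:
            continue
        loss = exposure[i, list(defaulted)].sum()  # losses from defaulted counterparties
        if loss >= equity[i]:
            defaulted.add(i)
            changed = True
print("defaulted banks:", sorted(defaulted))
```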
To continue with the session, the Banco Central del Uruguay presented its use case, Credit risk and its effects on the interbank market. The motivation of the project was to build a commercial and financial debt network for Uruguay and to provide an empirical quantification of the direct and indirect effects that defaulting firms have on banks. They built three different networks using reconstruction methods: a Firm-Bank network (links represent financial credit), a Bank-Bank network (interbank loans) and a Firm-Firm network (in the first case they considered the three main debtors and creditors of each firm studied, and in the second case they used imputation methods to complete the relationships between firms). They also applied network measures to characterize these networks, and DebtRank to measure the economic value (equity) in the network that is potentially affected by the distress or default of a given node.
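Since the case relies on DebtRank, a simplified sketch of the algorithm may help fix ideas; the impact matrix, economic values and initial shock below are illustrative assumptions, not the Uruguayan data:

```python
# Simplified DebtRank sketch: distress propagates from an initially shocked node
# through an impact matrix, each node propagating at most once; the score is the
# additional economic value affected beyond the initial shock.
# Impact matrix, economic values and the shocked node are illustrative assumptions.
import numpy as np

def debt_rank(W, v, shocked, psi=1.0):
    """W[i, j]: impact on i if j is distressed; v: relative economic values."""
    n = len(v)
    h = np.zeros(n)                         # distress levels in [0, 1]
    h[shocked] = psi
    active, inactive = {shocked}, set()     # distressed nodes propagate exactly once
    while active:
        h_new = np.minimum(1.0, h + np.array([sum(W[i, j] * h[j] for j in active)
                                              for i in range(n)]))
        newly = {i for i in range(n)
                 if h_new[i] > 0 and i not in active and i not in inactive}
        inactive |= active
        active, h = newly, h_new
    return h @ v - psi * v[shocked]         # extra economic value affected by the shock

W = np.array([[0.0, 0.3, 0.1],
              [0.2, 0.0, 0.4],
              [0.1, 0.2, 0.0]])
v = np.array([0.5, 0.3, 0.2])               # relative economic values, summing to 1
print("DebtRank of bank 0:", debt_rank(W, v, shocked=0))
```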

Systemic Risk II
This session focused on network reconstruction. It started with a review of Maximum Entropy Networks, whose principal idea is to build an ensemble of networks that retain some properties of the real network while being as random as possible. This can be achieved by maximizing the entropy associated with the distribution of observing a given network, subject to constraints that preserve the properties of the real network. With this analysis, it is possible to account for network effects even with partial information.
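As an illustration, the sketch below shows the simplest and most widely used variant of this idea for an interbank matrix: a maximum-entropy estimate constrained only by each bank's total lending and borrowing, followed by an iterative scaling step; the balance-sheet totals are illustrative:

```python
# Sketch of the classic maximum-entropy estimate of an interbank matrix from
# marginals only: with total lending a_i and borrowing l_j as constraints, the
# maximum-entropy solution spreads exposures as evenly as possible, W_ij ∝ a_i * l_j.
# An iterative proportional fitting (RAS-style) step re-imposes the marginals after
# zeroing self-loans. Balance-sheet totals are illustrative.
import numpy as np

a = np.array([30.0, 20.0, 10.0])      # total interbank assets (lending) per bank
l = np.array([25.0, 15.0, 20.0])      # total interbank liabilities (borrowing) per bank

W = np.outer(a, l) / l.sum()          # dense maximum-entropy estimate
np.fill_diagonal(W, 0.0)              # banks do not lend to themselves

for _ in range(100):                  # iterative proportional fitting to restore marginals
    W *= (a / W.sum(axis=1))[:, None]
    W *= (l / W.sum(axis=0))[None, :]

print(np.round(W, 2))
print("row sums:", np.round(W.sum(axis=1), 2), "col sums:", np.round(W.sum(axis=0), 2))
```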
Following this, the Banco Central de Bolivia presented its use case: How to measure exposure to systemic risk given the connections and interdependencies between financial institutions? In recent years, due to the development and modernization of financial system infrastructures, a suitable tool to monitor the financial system has become necessary. In order to identify important actors in the network, a Pólya filter was applied, together with PageRank, SinkRank and DebtRank, to measure systemic impact.

Systemic Risk III
In the last session of the course, Network Validation was presented, a methodology to separate information from noise in large network datasets by identifying a relatively small set of important links. The motivation is that real-world networks are often very large (approximately 10³ to 10⁶ nodes/links), so analyzing or visualizing them can be very hard; at the same time, a good validation procedure should retain the multiscale nature of the network. The first method analyzed was the disparity filter, whose null hypothesis is that a node distributes its strength uniformly across its outgoing links; each relative weight in the network is tested against a Dirichlet distribution. The next method was the hypergeometric filter, whose null hypothesis is that pairs of nodes form connections at random with fixed strengths; each weight is tested against a hypergeometric distribution. Finally, the Pólya filter was analyzed, whose null hypothesis is that pairs of nodes form connections at random with fixed strengths and fixed degrees (all based on a Pólya process); each weight is tested against the distribution that results from a Pólya process.
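As an illustration of how such filters work in practice, the sketch below implements the disparity filter's closed-form p-value on a toy weighted graph (the graph and the significance level are illustrative):

```python
# Sketch of the disparity filter: under the null that a node spreads its strength
# uniformly at random over its k links, the p-value of a link carrying a fraction
# p of that strength is (1 - p)**(k - 1); links significant at level alpha from at
# least one endpoint are kept in the backbone. The toy weighted graph is illustrative.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 10.0), ("A", "C", 1.0), ("A", "D", 1.0),
                           ("B", "C", 0.5), ("C", "D", 8.0)])

def disparity_pvalue(G, u, v):
    k = G.degree(u)
    if k <= 1:
        return 0.0                                  # a single link is trivially kept
    p = G[u][v]["weight"] / G.degree(u, weight="weight")
    return (1.0 - p) ** (k - 1)

alpha = 0.1
backbone = [(u, v) for u, v in G.edges()
            if min(disparity_pvalue(G, u, v), disparity_pvalue(G, v, u)) < alpha]
print("backbone links:", backbone)
```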

The last case presented was from the Banco de la República (Colombia): Measuring potential risk from levels of credit default. The purpose of the project was to identify risks in the credit system, in particular to determine the large-scale consequences of non-payment, using data on loans in four sectors: commercial, consumer, housing and microcredit. They described the Colombian credit system through a bipartite network, with lenders on one side and borrowers on the other, and then performed network validation (implementing the Pólya filter) in order to identify a backbone of statistically significant links, accounting for the size (capital) of the participants and the characteristics of lenders and borrowers.
Day 1

Opening session and group photo
Dr. Manuel Ramos Francia, Director General, CEMLA

Presentation of the training, Tomaso Aste UCL

Machine learning in finance I, Fabio Caccioli, UCL

Machine learning in finance II, Paolo Barucca, UCL

Identification of anomalous behavior of participants in the high value payment system by an Artificial Neural Network model. Colombia use case

Machine learning in finance III, Fabio Caccioli, UCL

Atypical Payments Alerting Model. Ecuador use case

Data integration and toolkit for payment system oversight. El Salvador use case

Discussion on machine learning in finance, Tomaso Aste, UCL

Day 2

Multivariate statistics, Giacomo Livan, UCL  

Determining price rigidity through microeconomic variables. Uruguay use case

Networks, Fabio Caccioli, UCL

Testing and characterization of the interbank market with network models. Peru use case

Blockchain Technologies I, Tomaso Aste, UCL

Blockchain Technologies II, Tomaso Aste, UCL

Distributed Ledger Technology RTGS conceptual design. Chile Use Case

Discussion on blockchain technologies and distributed systems for regulation, Tomaso Aste, UCL

Day 3

Systemic risk I, Fabio Caccioli, UCL

Credit risk and its effects on interbank market. Uruguay use case

Systemic risk II, Fabio Caccioli, UCL

Measure levels of exposure to systemic risk. Bolivia Use case

Systemic risk III, Giacomo Livan, UCL

Effect of non-payment in credit on the Financial Stability. Colombia use case 

Discussion on systemic risk, Tomaso Aste, UCL

Conclusions

The main purpose of the Course was to serve as a platform for the establishment of an Innovation Hub where the central banks of Latin America and the Caribbean can study and test new technologies to boost and expand their capacities as monitoring and regulatory agents.

Tomaso Aste
Professor of Complexity Science at the UCL Computer Science Department. A trained physicist, he has contributed substantially to research in complex structures analysis, financial systems modelling, artificial intelligence and machine learning. He is passionate about investigating the effect of technologies on socio-economic systems and currently focuses on peer-to-peer and distributed systems. Prof. Aste leads research on complexity science and network theory applied to socio-economic systems. He works with regulators on the application of FinTech and Blockchain to financial regulation. He is co-founder and Scientific Director of the UCL Centre for Blockchain Technologies, founder and Head of the Financial Computing and Analytics Group at UCL, Member of the Board of the ESRC LSE-UCL Systemic Risk Centre and Member of the Board of the Whitechapel Think Tank. He collaborates with the Financial Conduct Authority, the Bank of England and HMRC, and contributes to the All-Party Parliamentary Group on FinTech. He is leading an initiative to provide FinTech training to central bankers and regulators across South America. He is an advisor and consultant for several financial companies, banks, FinTech firms and digital-economy start-ups.

Paolo Barucca
Lecturer in the Department of Computer Science at University College London since March 2018. He previously worked as a postdoc on financial applications of network science and statistical physics at the Scuola Normale Superiore and at the University of Zurich. He did his PhD at Sapienza University of Rome under Prof. Giorgio Parisi, specializing in the statistical physics of disordered and complex systems. He is currently researching systemic risk in financial networks and statistical learning via statistical physics and random matrix theory.

Giacomo Livan
Senior Research Fellow at the Department of Computer Science of UCL. He obtained a PhD in Theoretical Physics from the University of Pavia (Italy) in 2012, after which he joined the UNESCO-funded Abdus Salam International Centre for Theoretical Physics in Trieste (Italy).
He joined UCL in 2014, and in 2015 he was awarded a research Fellowship from the UK Engineering and Physical Sciences Research Council. Giacomo's research activity focuses on applying methods and ideas borrowed from Complex Systems Science to the quantitative analysis of socio-economic systems.

Fabio Caccioli
Associate professor in the Department of Computer Science at University College London. Prior to joining UCL, he has been a research associate in the Centre for Risk Studies, University of Cambridge, and a postdoctoral fellow at the Santa Fe Institute (Santa Fe, US). Fabio holds a PhD in Statistical Physics from the International School for Advanced Studies (Trieste, Italy), and he obtained an MSc in Theoretical Physics and BSc in Physics from Università degli Studi di Parma (Parma, Italy). His research focuses on the application of statistical mechanics and complex networks to the study of economic and financial systems, in particular on systemic risk and financial stability.