The CFA Level II exam consists of two sets of 11 item sets, one set per session (one in the morning, one in the afternoon), each set comprising 44 multiple-choice questions, for a total of 88 questions and about 4 questions per item set on average.
The CFA Level II exam lasts 4 hours and 24 minutes, split into two equal sessions of 2 hours and 12 minutes, one in the morning and one in the afternoon, with an optional break in between.
For more information: https://www.cfainstitute.org/en/programs/cfa/exam/level-ii
| Topic | Exam weight |
| --- | --- |
| Ethical and Professional Standards | 10-15% |
| Financial Statement Analysis | 10-15% |
This module gives CFA Level II candidates the principles and tools used to value fixed-income securities through arbitrage.
It introduces the binomial interest rate tree as a valuable tool for valuing both option-free bonds and bonds with embedded options.
Key points include treating fixed-income securities as portfolios of zero-coupon bonds, the elimination of arbitrage opportunities in efficient markets, and the arbitrage-free approach to bond pricing.
The Monte Carlo method is also developed as a valuation technique for interest rate products.
In addition, term structure models of interest rates are explored to explain the shape of the yield curve, along with the distinction between arbitrage-free models and equilibrium models in the valuation of bonds with embedded options.
This module presents the principles and techniques of arbitrage-based valuation of fixed-income securities, with particular emphasis on the binomial tree.
The key points covered include the following:
First, the valuation of financial assets rests on the fundamental principle that their value is determined by the present value of their expected future cash flows.
In markets, prices adjust to eliminate any arbitrage opportunity, that is, any transaction producing a riskless profit with no net cash outlay and therefore financed entirely by borrowing.
For bonds without embedded options, the arbitrage-free value is simply the present value of the expected future cash flows, computed using the market's benchmark rates.
The binomial interest rate tree provides a framework in which the one-period rate can take two possible values, built on three inputs: an interest rate model governing the random process of rates, an assumed level of interest rate volatility, and the current benchmark yield curve.
Within the tree, adjacent interest rates at a given time step are multiples of e raised to the power 2σ, a consequence of the assumed lognormal distribution of rates.
To determine a bond's value at a specific node, the valuation methodology known as backward induction is used, starting at maturity and working from right to left.
The interest rate tree must be calibrated to the current yield curve by selecting rates that produce an arbitrage-free valuation.
Moreover, option-free bonds valued with the binomial tree should have the same value as when discounted with spot rates.
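As a rough sketch of the backward-induction mechanics described above, the following values a hypothetical 2-year, 5% annual-coupon bond on a one-period binomial tree; the rates and the equal up/down probabilities are made-up assumptions, not a calibrated tree.

```python
# Backward induction on a tiny binomial rate tree (illustrative inputs).
def backward_induction():
    face, coupon = 100.0, 5.0
    r0 = 0.03                  # one-year rate today (assumed)
    r_up, r_down = 0.04, 0.02  # one-year rates in one year, equal probability (assumed)

    # Step 1: value at each time-1 node = discounted final cash flow (face + coupon)
    v_up = (face + coupon) / (1 + r_up)
    v_down = (face + coupon) / (1 + r_down)

    # Step 2: add the time-1 coupon at each node and discount back to today,
    # averaging across the two equally likely paths
    v0 = 0.5 * ((v_up + coupon) / (1 + r0) + (v_down + coupon) / (1 + r0))
    return v0
```

The same right-to-left sweep generalizes to trees with more periods: each node's value is the probability-weighted, discounted average of its two successor nodes plus the coupon paid at that date.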
By contrast, the Monte Carlo method offers an alternative approach: it simulates a large number of potential interest rate paths to understand how a security's value is affected.
It consists of randomly generating rate paths and estimating the bond's value as the average of the bond prices computed along those simulated paths.
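A minimal sketch of this path-averaging idea, under assumed parameters (starting short rate, volatility, a simple lognormal rate step) rather than any calibrated model:

```python
import math
import random

# Illustrative Monte Carlo valuation of a 3-year, 4% annual-coupon bond.
def mc_bond_value(n_paths=20000, seed=42):
    random.seed(seed)
    r0, sigma = 0.03, 0.10          # assumed starting short rate and volatility
    cash_flows = [4.0, 4.0, 104.0]  # annual coupons plus final principal
    total = 0.0
    for _ in range(n_paths):
        r, discount, pv = r0, 1.0, 0.0
        for cf in cash_flows:
            discount /= (1 + r)      # discount over the period just ended
            pv += cf * discount
            r *= math.exp(sigma * random.gauss(0.0, 1.0))  # lognormal rate step
        total += pv
    return total / n_paths           # average price across simulated paths
```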
Finally, the module covers term structure models of interest rates, including equilibrium models and arbitrage-free models, which aim to explain the shape of the yield curve and are used to value bonds, including bonds with embedded options.
Note that arbitrage-free models are commonly used to value bonds with embedded options.
Unlike equilibrium models, they start from the observed market prices of a reference set of financial instruments, assuming that this reference set is correctly priced.
Embedded options are rights attached to a bond that can be exercised by the issuer, by the bondholder, or automatically depending on interest rate movements.
These options can be simple, such as call, put, and extension options, or complex, involving combinations of options.
Valuing bonds with embedded options requires combining the values obtained from the arbitrage-free valuation model for both the straight bond and each embedded option.
The value of a callable bond is reduced by the issuer's call option, while the value of a putable bond is increased by the bondholder's put option.
Interest rate volatility affects embedded options, and a binomial interest rate tree is used to model that volatility.
Valuing bonds with embedded options involves generating a binomial tree, determining whether the option is exercised at each node, and applying backward induction.
The option-adjusted spread (OAS) and effective duration are used, respectively, to adjust the bond's price for the presence of embedded options and to assess the sensitivity of the bond's price to changes in rates.
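Effective duration is typically computed from three model prices, repricing the bond after parallel shifts of the benchmark curve; the prices below are hypothetical model outputs, not real quotes.

```python
# Effective duration via the standard finite-difference formula:
# (PV_down - PV_up) / (2 * delta_curve * PV_0)
def effective_duration(pv_down, pv_up, pv0, delta_curve):
    """pv_down: price after rates fall by delta_curve; pv_up: after rates rise."""
    return (pv_down - pv_up) / (2 * delta_curve * pv0)

# Example: rates shifted +/- 30 bp around a base price of 100.75 (made-up numbers)
dur = effective_duration(pv_down=102.89, pv_up=98.67, pv0=100.75, delta_curve=0.0030)
```

Because the up and down prices come from the valuation model (with the OAS held constant), the measure captures how embedded options change the bond's rate sensitivity.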
Convertible bonds have unique features, such as the conversion price and conversion ratio, and their valuation requires accounting for the characteristics of the bond, the stock, and the option.
The risk-return profile of a convertible bond depends on the underlying stock price relative to the conversion price. Convertible bond valuation follows the arbitrage-free framework, and each component can be valued separately.
In the context of credit analysis, this module covers several important topics.
These key points can be summarized as follows:
One aspect of credit risk modeling involves three crucial factors: the expected exposure at default, the recovery rate, and the loss given default.
To assess a bond's credit risk accurately, the credit valuation adjustment (CVA) is computed by summing the present values of the expected losses over the bond's remaining life.
Risk-neutral probabilities are used, and discounting is done at risk-free rates.
The CVA serves as compensation for bearing default risk and can be expressed as a credit spread.
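The CVA calculation just described can be sketched as a sum of discounted expected losses; the exposures, hazard rate, and recovery rate below are made-up inputs for illustration.

```python
# CVA = sum over t of PV(expected loss in year t), discounted at the risk-free rate.
def cva(exposures, hazard_rate, recovery_rate, risk_free):
    value = 0.0
    survival = 1.0
    for t, exposure in enumerate(exposures, start=1):
        pod = survival * hazard_rate           # risk-neutral probability of default in year t
        survival -= pod                        # survival probability after year t
        loss_given_default = exposure * (1 - recovery_rate)
        value += pod * loss_given_default / (1 + risk_free) ** t
    return value

# Hypothetical 3-year bond: expected exposures per year, 2% hazard rate, 40% recovery
example = cva(exposures=[104.0, 103.0, 102.0], hazard_rate=0.02,
              recovery_rate=0.40, risk_free=0.03)
```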
Credit scores and credit ratings are third-party assessments of the creditworthiness of individuals or entities.
These ratings, issued by rating agencies, are used in different markets.
Credit analysts use credit ratings and transition probabilities to adjust a bond's yield, reflecting the probabilities of credit migration.
Importantly, credit spread migration typically reduces the expected return.
Credit analysis models fall into two broad categories: structural models and reduced-form models.
Structural models view stakeholders' positions through an options lens, while reduced-form models focus on the probability of default using observable variables.
When interest rates are assumed to be volatile, an arbitrage-free valuation framework can be used to estimate the credit risk associated with a bond.
For floating-rate notes, the discount margin is analogous to the credit spread observed for fixed-coupon bonds.
This discount margin can be computed within an arbitrage-free valuation framework.
Arbitrage-free valuation methods are useful for assessing the sensitivity of the credit spread to changes in credit risk parameters.
The term structure of credit spreads is influenced by both macroeconomic and microeconomic factors.
Unfavorable economic conditions tend to steepen the credit spread curve.
The shape of the credit spread curve is also influenced by market dynamics, such as supply and demand and which securities trade most frequently.
Issuer- or sector-specific factors, such as events likely to reduce leverage in the future, can affect the shape of the credit spread curve, causing it to flatten or invert.
Bonds with a high risk of default often trade close to their recovery value across maturities.
In such cases, the credit spread curve therefore conveys less information about the relationship between credit risk and maturity.
When analyzing securitized debt, credit analysts consider various factors.
These include asset concentration, the homogeneity or heterogeneity of the assets in terms of credit risk, and other characteristics relevant to making informed investment decisions.
This module focuses on credit default swaps (CDS).
A credit default swap (CDS) is a contractual agreement between two parties, in which one party seeks protection against potential losses arising from a borrower's default within a specified period.
The main aspects of CDS are as follows:
A CDS references the debt of a third party, known as the reference entity.
More precisely, it references a senior unsecured obligation, known as the reference obligation.
In addition, a CDS generally covers all obligations of the reference entity that rank pari passu with or senior to the reference obligation.
The two parties to a CDS are the credit protection buyer and the credit protection seller. Specifically, the buyer is said to be "short" the reference entity's credit quality, while the seller is said to be "long" the reference entity's credit quality.
A credit event triggers the CDS payout. Credit events can include bankruptcy, failure to pay, and, in some cases, involuntary restructuring.
A CDS can be settled either in cash or by physical delivery.
In a cash settlement, the payout is determined either by the cheapest-to-deliver obligation or by a sale (auction) of the reference entity's debt.
In valuation terms, a CDS involves estimating the present value of the premium leg (payments made by the protection buyer to the protection seller) and of the protection leg (the payment from the protection seller to the buyer in the event of default).
Comparing these two values determines whether an upfront premium is paid by the buyer or by the seller.
The hazard rate, which represents the probability of default given that no default has occurred earlier, plays an important role in determining the value of the expected payments under a CDS.
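A simplified sketch of the two legs under a constant hazard rate, with annual periods and no accrual-on-default payment; all inputs (spread, hazard rate, recovery, discount rate) are assumptions for illustration.

```python
# Present values of the protection leg and premium leg of a CDS (simplified).
def cds_legs(notional, spread, hazard, recovery, risk_free, years):
    survival = 1.0
    protection, premium = 0.0, 0.0
    for t in range(1, years + 1):
        df = 1 / (1 + risk_free) ** t
        default_prob = survival * hazard        # default occurs in year t
        survival -= default_prob
        protection += default_prob * notional * (1 - recovery) * df
        premium += spread * notional * survival * df  # coupon paid only if no default yet
    return protection, premium

prot, prem = cds_legs(notional=100.0, spread=0.01, hazard=0.02,
                      recovery=0.40, risk_free=0.03, years=5)
upfront = prot - prem  # positive: the protection buyer pays an upfront premium
```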
CDS prices are often quoted in terms of credit spreads, which indicate the compensation the protection seller receives from the protection buyer.
These credit spreads are typically expressed as a spread curve, showing the relationship between credit spreads on obligations of different maturities from the same borrower.
Note that a CDS's value changes over time with the reference entity's credit quality, producing gains or losses for the parties involved even in the absence of default.
As a CDS approaches maturity, its spreads tend to converge toward zero.
Accumulated gains or losses in a CDS can be monetized by entering a position that offsets the original one.
CDS are also used to adjust credit risk exposures and to capitalize on differing assessments of the cost of credit across various instruments tied to the reference entity, such as debt, equity, and derivatives.
This topic gives CFA Level II candidates a foundation for understanding the pricing and valuation of forward contracts, futures, and swaps.
It explores the key points by discussing the rules followed by arbitrageurs, the no-arbitrage approach used for pricing and valuation, and the assumptions made in this context.
Carry arbitrage models are used, and a distinction is drawn between pricing and valuation. The forward price is set so that the contract's value at initiation is zero, and similar principles apply to futures.
The value of a forward commitment depends on various factors. Forward rate agreements (FRAs) are introduced as forward contracts on interest rates.
Equities and fixed-income securities involve specific pricing considerations. Swaps are priced and valued using replicating or offsetting portfolios, and the value of an interest rate swap is computed from fixed swap rates.
Currency swaps and equity swaps follow similar approaches. With these concepts understood, readers can make informed decisions in financial markets regarding the pricing and valuation of forward commitments.
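The carry-arbitrage logic above can be sketched for the simplest case, a forward on a non-dividend-paying asset: the forward price compounds the spot at the risk-free rate so the contract starts with zero value, and the later value of the long position is the spot minus the discounted forward price. Numbers are illustrative.

```python
# Carry-arbitrage pricing and valuation of a forward on a non-income-paying asset.
def forward_price(spot, risk_free, years):
    return spot * (1 + risk_free) ** years   # no-arbitrage forward price at initiation

def forward_value_long(spot_t, f0, risk_free, years_remaining):
    # Value of the long forward during its life: spot minus PV of the agreed price
    return spot_t - f0 / (1 + risk_free) ** years_remaining

f0 = forward_price(spot=100.0, risk_free=0.05, years=1.0)
v = forward_value_long(spot_t=104.0, f0=f0, risk_free=0.05, years_remaining=0.5)
```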
This module serves as a fundamental guide to understanding the valuation of contingent claims, specifically focusing on the valuation of different options.
It covers key points including the essential rules arbitrageurs adhere to: committing no capital of their own and bearing no price risk. The valuation process follows the no-arbitrage approach, which rests on the law of one price.
The reading assumes certain conditions, such as the availability of identifiable and investable replicating instruments, the absence of market frictions, permission for short selling and borrowing/lending at a known risk-free rate, and a known distribution for the underlying instrument's price.
The reading introduces the two-period binomial model and its relationship to three one-period binomial models positioned at different times. It discusses valuation approaches for European-style and American-style options, highlighting the use of the expectations approach for European-style options and the no-arbitrage approach for both types.
American-style options are influenced by early exercise when working backward through the binomial tree. Interest rate options are valued using a modified Black futures option model, considering factors like the underlying being a forward rate agreement (FRA), accrual period adjustments, and day-count conventions.
The Black-Scholes-Merton (BSM) option valuation model assumes the underlying instrument follows geometric Brownian motion, resulting in a lognormal distribution of prices. The BSM model is interpreted as a dynamically managed portfolio consisting of the underlying instrument and zero-coupon bonds. It explains the significance of N(d1) and N(d2) in the BSM model, relating them to replication of options, delta determination, risk-neutral probability estimation, and the impact on option values.
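The standard BSM formula for a European call on a non-dividend-paying stock can be written directly from N(d1) and N(d2); the inputs below are illustrative.

```python
from math import log, sqrt, exp
from statistics import NormalDist

# Black-Scholes-Merton price of a European call (no dividends).
def bsm_call(s, k, r, sigma, t):
    n = NormalDist().cdf
    d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    # s*N(d1): underlying component of the replicating portfolio;
    # N(d2): risk-neutral probability the option expires in the money
    return s * n(d1) - k * exp(-r * t) * n(d2)

price = bsm_call(s=100.0, k=100.0, r=0.05, sigma=0.20, t=1.0)
```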
The Black futures option model is applied when the underlying is a futures or forward contract. Valuation of interest rate options is achieved using an adjusted version of this model, incorporating FRAs and considering day-count conventions and underlying notional amounts. The reading introduces interest rate caps and floors, which are portfolios of interest rate call and put options with sequential maturities and the same exercise rate. Swaptions, options on swaps, are also discussed, distinguishing between payer and receiver swaptions.
Risk measures, including delta, gamma, theta, vega, and rho, are defined and their roles in option valuation are explained. Delta hedging and the concept of a delta-neutral portfolio are discussed. The estimation of option price changes using delta approximation and the error estimation using gamma are explored. The reading emphasizes the importance of gamma in capturing non-linearity risk and the use of a delta-plus-gamma approximation for better price estimation.
Furthermore, the reading touches upon theta as a measure of option value change over time, vega as the sensitivity to volatility changes, and rho as the sensitivity to changes in the risk-free interest rate. It acknowledges that historical volatility can be estimated but lacks predictability for future volatility. Implied volatility is defined as the BSM model volatility that corresponds to the market option price, reflecting market participants' beliefs about future underlying volatility.
The reading introduces the concepts of the volatility smile and volatility surface. The volatility smile represents the implied volatility plotted against the exercise price, while the volatility surface expands it to a three-dimensional plot, incorporating expiration time as an additional dimension. It highlights that the observed volatility surface deviates from the expectation of a flat surface predicted by the BSM model assumptions.
Multiple linear regression is a statistical technique utilized to model the linear relationship between a dependent variable and two or more independent variables. It finds practical application in explaining financial variables, testing theories, and making forecasts.
The process of conducting multiple regression involves several decision points, including identifying the dependent and independent variables, selecting an appropriate regression model, testing underlying assumptions, evaluating goodness of fit, and making necessary adjustments.
In a multiple regression model, the equation represents the relationship as Yi = b0 + b1X1i + b2X2i + b3X3i + ... + bkXki + εi, where Y is the dependent variable and X1 to Xk are the independent variables. The model is estimated using n observations, where i ranges from 1 to n.
The intercept coefficient, denoted as b0, represents the expected value of Y when all independent variables are set to zero. On the other hand, the slope coefficients (b1 to bk) are also known as partial regression coefficients, describing the impact of each independent variable (Xj) on Y while holding other independent variables constant.
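A minimal sketch of fitting such a model by ordinary least squares; the dataset is made up, with Y constructed as an exact linear function of the regressors so the estimated coefficients are easy to check.

```python
import numpy as np

# Multiple regression Y = b0 + b1*X1 + b2*X2 + e, estimated by OLS.
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0, 6.0, 5.0])
y = 2.0 + 3.0 * x1 - 1.0 * x2        # exact linear relationship (no noise)

X = np.column_stack([np.ones_like(x1), x1, x2])   # design matrix with intercept column
coef, *_ = np.linalg.lstsq(X, y, rcond=None)      # [b0_hat, b1_hat, b2_hat]
```

Each slope estimate describes the effect of its regressor on Y holding the other regressor constant, exactly as the partial regression coefficients are interpreted above.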
To ensure the validity of multiple regression models, several assumptions must be met.
These assumptions include: (1) linearity of the relationship between the dependent and independent variables, (2) homoskedasticity (equal variance of errors) across all observations, (3) independence of errors, (4) normality of error distribution, and (5) independence of the independent variables.
Diagnostic plots can aid in assessing whether these assumptions hold true.
Scatterplots of the dependent variable against the independent variables help identify non-linear relationships, while residual plots assist in detecting violations of homoskedasticity and independence of errors.
In multiple regression analysis, the adjusted R2 is utilized as a measure of how well the model fits the data, as it accounts for the number of independent variables included in the model.
Unlike the regular R2, the adjusted R2 does not automatically increase as more independent variables are added.
The adjusted R2 will increase or decrease when a variable is added to the model, depending on whether its coefficient's absolute value of the t-statistic is greater or less than 1.0.
To assess and select the "best" model among a group with the same dependent variable, other criteria like Akaike's information criterion (AIC) and Schwarz's Bayesian information criterion (BIC) are commonly used. AIC is preferred when the goal is prediction, while BIC is favored for evaluating goodness of fit. In both cases, lower values indicate better model performance.
When it comes to hypothesis testing of individual coefficients in multiple regression, the procedures using t-tests remain the same as in simple regression.
However, for jointly testing a subset of variables, the joint F-test is employed. This test compares a "restricted" model, which includes a narrower set of independent variables nested within the broader "unrestricted" model.
The null hypothesis states that the slope coefficients of all independent variables excluded from the restricted model are zero. The general linear F-test expands on this concept, examining the null hypothesis that all slope coefficients in the unrestricted model are equal to zero.
Predicting the value of the dependent variable using a multiple regression model follows a similar process to simple regression. The estimated slope coefficients, multiplied by the assumed values of the independent variables, are summed, and the estimated intercept coefficient is added.
In multiple regression, the confidence interval around the forecasted value of the dependent variable accounts for both model error and sampling error, which arises from forecasting the independent variables. The larger the sampling error, the wider the standard error of the forecast of Y, resulting in a wider confidence interval.
Principles for proper regression model specification encompass several aspects, including the economic reasoning behind variable choices, the concept of parsimony, achieving good out-of-sample performance, selecting an appropriate model functional form, and ensuring no violations of regression assumptions.
Instances of failures in regression functional form often arise from different factors, such as omitted variables, inappropriate forms of variables, improper variable scaling, and unsuitable data pooling. These failures can lead to violations of regression assumptions.
Heteroskedasticity occurs when the variance of regression errors varies across observations. Unconditional heteroskedasticity refers to situations where the error variance is not correlated with the independent variables, while conditional heteroskedasticity emerges when the error variance correlates with the values of the independent variables.
Unconditional heteroskedasticity does not pose significant issues for statistical inference. However, conditional heteroskedasticity is problematic as it leads to an underestimation of the standard errors of regression coefficients, resulting in inflated t-statistics and a higher likelihood of Type I errors.
Detecting conditional heteroskedasticity can be accomplished using the Breusch-Pagan (BP) test, and the bias it introduces into the regression model can be rectified by computing robust standard errors.
Serial correlation, also known as autocorrelation, occurs when regression errors are correlated across observations. It can be a significant problem in time-series regressions, leading to inconsistent coefficient estimates and underestimation of standard errors, which in turn inflates t-statistics (similar to conditional heteroskedasticity).
The Breusch-Godfrey (BG) test provides a robust method for detecting serial correlation. This test utilizes residuals from the original regression as the dependent variable, running them against initial regressors plus lagged residuals. The null hypothesis (H0) in this case is that the coefficients of the lagged residuals are zero.
Biased estimates of standard errors caused by serial correlation can be rectified using robust standard errors, which also account for conditional heteroskedasticity.
Multicollinearity occurs when there are high pairwise correlations between independent variables or when three or more independent variables form approximate linear combinations that are highly correlated. Multicollinearity leads to inflated standard errors and reduced t-statistics.
The variance inflation factor (VIF) serves as a measure for quantifying multicollinearity. A VIF value of 1 for a specific independent variable (Xj) indicates no correlation with the other regressors. If VIFj exceeds 5, further investigation is warranted, and a VIFj value exceeding 10 indicates serious multicollinearity requiring correction.
Potential solutions to address multicollinearity include dropping one or more of the regression variables, using alternative proxies for certain variables, or increasing the sample size.
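The VIF described above comes from an auxiliary regression of one regressor on the others: VIFj = 1/(1 − R²j). A sketch with synthetic data in which two regressors are deliberately near-collinear:

```python
import numpy as np

# Variance inflation factor for regressor j: regress X[:, j] on the other
# columns (with intercept) and compute 1 / (1 - R^2).
def vif(X, j):
    others = np.delete(X, j, axis=1)
    others = np.column_stack([np.ones(len(X)), others])
    coef, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
    fitted = others @ coef
    ss_res = np.sum((X[:, j] - fitted) ** 2)
    ss_tot = np.sum((X[:, j] - X[:, j].mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    return 1.0 / (1.0 - r2)

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.1 * rng.normal(size=200)   # nearly collinear with x1 (by construction)
x3 = rng.normal(size=200)              # unrelated regressor
X = np.column_stack([x1, x2, x3])
```

Here `vif(X, 0)` is far above the 5 and 10 thresholds mentioned above, while `vif(X, 2)` stays near 1.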
Regression results can be influenced by two types of observations: high-leverage points, characterized by extreme values of independent variables, and outliers, characterized by extreme values of the dependent variable.
To identify high-leverage points, leverage is used as a measure. If an observation's leverage exceeds three times the average leverage, that is 3(k + 1)/n, where k is the number of independent variables and n the number of observations, the observation is considered potentially influential. On the other hand, outliers can be identified using studentized residuals. If the studentized residual is greater than the critical value of the t-statistic with n - k - 2 degrees of freedom, the observation is potentially influential.
Cook's distance, also known as Cook's D (Di), is a metric that quantifies the impact of individual data points on the regression results. It measures how much the estimated regression values change if a specific observation (i) is removed. If Di is greater than 2 times the square root of k/n, where k is the number of independent variables and n is the number of observations, then the observation is highly likely to be influential. An influence plot can be used to visually analyze the leverage, studentized residuals, and Cook's D for each observation.
Dummy variables, also called indicator variables, are utilized to represent qualitative independent variables. They take a value of 1 to indicate the presence of a specific condition and 0 otherwise. When including n possible categories, the regression model must incorporate n - 1 dummy variables.
Intercept dummy variables modify the original intercept based on specific conditions.
When the intercept dummy is 1, the regression line shifts up or down parallel to the base regression line. Similarly, slope dummy variables allow for a changing slope under specific conditions. When the slope dummy is 1, the slope changes according to (dj + bj) × Xj, where dj represents the coefficient of the dummy variable and bj is the slope of Xj in the original regression line.
Logistic regression models are employed when the dependent variable is qualitative or categorical. They are commonly used in binary classification problems encountered in machine learning and neural networks. In logistic regression, the event probability (P) undergoes a logistic transformation into the log odds, ln[P/(1 − P)]. This transformation linearizes the relationship between the transformed dependent variable and the independent variables.
Logistic regression coefficients are typically estimated using the maximum likelihood estimation (MLE) method. The slope coefficients are interpreted as the change in the log odds of the event occurring per unit change in the independent variable, while holding all other independent variables constant.
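The log-odds link can be made concrete with a small numerical example; the fitted intercept and slope below are hypothetical, not estimated from data.

```python
from math import exp, log

# Logistic model: ln[P / (1 - P)] = b0 + b1*x, so P = 1 / (1 + e^-(b0 + b1*x)).
def event_probability(x, b0=-2.0, b1=0.8):
    log_odds = b0 + b1 * x            # the linearized quantity the model fits
    return 1.0 / (1.0 + exp(-log_odds))

p = event_probability(x=3.0)          # log odds = -2.0 + 0.8*3 = 0.4
odds_ratio = exp(0.8)                 # odds multiply by e^b1 per unit increase in x
```

Transforming back and forth confirms the interpretation of the slope: each one-unit increase in x adds b1 to the log odds of the event.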
The reading introduces various aspects of time series modeling and forecasting. In a linear trend model, the predicted trend value of a time series in period t is bˆ0 + bˆ1t, while in a log-linear trend model it is e^(bˆ0 + bˆ1t). Different types of trend models are suitable for time series with constant growth in amount or constant growth rate.
Trend models may not fully capture the behavior of a time series, as indicated by serial correlation of the error term. If the Durbin-Watson statistic significantly differs from 2, suggesting serial correlation, an alternative model should be considered.
Autoregressive models (AR) use lagged values to predict the current value of a time series. A time series is considered covariance stationary if its expected value, variance, and covariance remain constant and finite over time. Nonstationary time series may exhibit trends or non-constant variance. Linear regression is valid for estimating autoregressive models only if the time series is covariance stationary.
For a specific autoregressive model to fit well, the autocorrelations of the error term should be zero at all lags. Mean-reverting time series tend to fall when above their long-run mean and rise when below it. Covariance stationary time series exhibit mean reversion.
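For a covariance-stationary AR(1) model, x_t = b0 + b1·x_{t-1} + e_t, the long-run (mean-reverting) level is b0/(1 − b1). A sketch with illustrative coefficients:

```python
# Mean reversion in an AR(1) model.
def ar1_long_run_mean(b0, b1):
    assert abs(b1) < 1, "stationarity requires |b1| < 1"
    return b0 / (1 - b1)

def ar1_forecast(x_prev, b0, b1):
    return b0 + b1 * x_prev

mu = ar1_long_run_mean(b0=1.0, b1=0.6)            # long-run mean
step = ar1_forecast(x_prev=4.0, b0=1.0, b1=0.6)   # one-step forecast from above the mean
```

Starting above the long-run mean, the forecast falls toward it, illustrating the mean-reverting behavior described above.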
Forecasts in autoregressive models can be made for future periods based on past values. Out-of-sample forecasts are more valuable for evaluating forecasting performance than in-sample forecasts. The root mean squared error (RMSE) is a criterion for comparing forecast accuracy.
The coefficients in time series models can be unstable across different sample periods, so selecting a stationary sample period is important. A random walk is a nonstationary time series where the value in one period is the previous value plus a random error. A random walk with drift includes a nonzero intercept. Random walks have unit roots and are not covariance stationary.
Transforming a time series through differencing can sometimes make it covariance stationary. Moving averages use past values of a time series to calculate its current value, while moving-average models (MA) use lagged error terms for prediction. The order of an MA model can be determined by examining the autocorrelations.
Autoregressive and moving-average time series exhibit different patterns in autocorrelations. Seasonality can be modeled by including seasonal lags in the model. ARMA models have limitations, including unstable parameters and difficulty in determining the AR and MA order.
Autoregressive conditional heteroskedasticity (ARCH) refers to the variance of the error term depending on previous errors. ARCH(1) errors can be detected by regressing the squared residuals on their one-period lagged values and testing whether the slope coefficient is significant.
Linear regression should be used cautiously when time series have unit roots or are not cointegrated. The Engle-Granger procedure, which applies a Dickey-Fuller test to the residuals of the regression, can determine whether the time series are cointegrated.
Machine learning methods are increasingly being used in various stages of the investment management value chain. These methods aim to extract knowledge from large datasets by identifying underlying patterns and making predictions without human intervention.
Supervised learning relies on labeled training data, with observed inputs (X's or features) and associated outputs (Y or target). It can be categorized into regression, which predicts continuous target variables, and classification, which deals with categorical or ordinal target variables.
Unsupervised learning algorithms, on the other hand, work with unlabeled data and infer relationships between features, summarize them, or reveal underlying structures in the data.
Common applications of unsupervised learning include dimension reduction and clustering.
Deep learning utilizes sophisticated algorithms and neural networks to address complex tasks such as image classification and natural language processing. Reinforcement learning involves a computer learning through interaction with itself or data generated by the same algorithm.
Generalization refers to an ML model's ability to maintain its predictive power when applied to new, unseen data. Overfitting is a common issue where models are overly tailored to the training data and fail to generalize. Bias error measures how well a model fits the training data, variance error quantifies the variation of model results with new data, and base error arises from randomness in the data. Out-of-sample error combines bias, variance, and base errors.
To address the holdout sample problem, k-fold cross-validation shuffles and divides the data into k subsets, using k-1 subsets for training and one subset for validation. Regularization methods reduce statistical variability in high-dimensional data estimation or prediction by reducing model complexity.
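The k-fold procedure described above can be sketched in a few lines. This is an illustrative implementation with toy data, not a reference to any particular library; the function name and split scheme are choices made here for clarity.

```python
import random

def k_fold_splits(data, k, seed=0):
    """Shuffle the data, divide it into k subsets, and yield
    (train, validation) pairs in which each subset serves as the
    validation set exactly once."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    folds = [shuffled[i::k] for i in range(k)]   # k roughly equal subsets
    for i in range(k):
        validation = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, validation

data = list(range(10))
for train, validation in k_fold_splits(data, k=5):
    # Every observation is used, and each appears in exactly one
    # validation fold across the k iterations.
    assert sorted(train + validation) == sorted(data)
```

Averaging the model's validation error across the k folds gives a more stable estimate of out-of-sample performance than a single holdout sample.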
LASSO is a popular penalized regression technique that assigns a penalty based on the absolute values of regression coefficients, promoting feature selection.
Support vector machines (SVM) aim to find the optimal hyperplane for classification tasks. K-nearest neighbor (KNN) is a supervised learning method used for classification by comparing similarities between new observations and existing data points. Classification and regression trees (CART) are utilized for predicting categorical or continuous target variables.
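Of the three methods just named, KNN is the simplest to show concretely. Below is a minimal sketch with hypothetical two-feature data (the features, labels, and thresholds are invented for illustration): a new observation is classified by majority vote among its k nearest labeled neighbors.

```python
import math
from collections import Counter

def knn_predict(train, new_point, k=3):
    """Classify new_point by majority vote among the k nearest
    labeled observations, using Euclidean distance."""
    neighbors = sorted(train, key=lambda item: math.dist(item[0], new_point))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical data: (leverage, volatility) -> rating bucket
train = [
    ((0.2, 0.1), "investment_grade"),
    ((0.3, 0.2), "investment_grade"),
    ((0.8, 0.7), "high_yield"),
    ((0.9, 0.8), "high_yield"),
    ((0.7, 0.9), "high_yield"),
]
label = knn_predict(train, (0.25, 0.15))  # nearest neighbors are the IG points
```

The choice of k and of the distance metric are hyperparameters; small k makes the classifier sensitive to noise, large k smooths over local structure.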
Ensemble learning combines predictions from multiple models to improve accuracy and stability. Random forest classifiers consist of decision trees generated through bootstrap aggregation (bagging) of the training data and random subsets of the features. Principal components analysis (PCA) reduces correlated features into a smaller set of uncorrelated composite variables. K-means clustering partitions observations into a prespecified number of non-overlapping clusters, and hierarchical clustering builds a hierarchy of clusters using bottom-up (agglomerative) or top-down (divisive) approaches.
Neural networks consist of interconnected nodes and are used for tasks involving non-linearities and complex interactions. Deep neural networks (DNNs) have multiple hidden layers and are at the forefront of artificial intelligence. Reinforcement learning involves an agent maximizing rewards over time while considering environmental constraints.
This reading emphasizes the interconnectedness between the state of the economy and financial market activity. Financial markets serve as platforms where savers and investors connect, enabling savers to postpone current consumption for future consumption, governments to raise capital for societal needs, and corporations to access funds for profitable investments. These activities contribute to economic growth and employment opportunities. Financial instruments, such as bonds and equities, represent claims on the underlying economy, highlighting the significant connection between economic decisions and the prices of these instruments.
The purpose of this reading is to identify and explain the relationship between the real economy and financial markets, and how economic analysis can be used to value individual financial market securities as well as collections of securities, such as market indexes.
The reading begins by introducing the fundamental pricing equation for all financial instruments. It then delves into the relationship between the economy and real default-free debt. The analysis further extends to examine how the economy influences the prices of nominal default-free debt, credit risky debt (such as corporate bonds), publicly traded equities, and commercial real estate.
This module explores important considerations for ETF investors, including understanding how ETFs function and trade, their tax-efficient attributes, and their key portfolio applications.
ETFs rely on a creation/redemption mechanism facilitated by authorized participants (APs), who have the exclusive ability to create new ETF shares or redeem existing ones. ETFs trade in both primary and secondary markets, with end investors trading in the secondary market much as they would stocks.
When evaluating ETF performance, it is more useful to examine holding period performance deviations (tracking differences) rather than solely focusing on the standard deviation of daily return differences (tracking error).
Tracking differences arise due to various factors, such as fees and expenses, representative sampling, index changes, regulatory and tax requirements, and fund accounting practices.
From a tax perspective, ETFs are generally treated in a manner similar to the securities they hold. They offer advantages over traditional mutual funds, as portfolio trading is typically not required when investors enter or exit an ETF. Additionally, the creation/redemption process enables ETFs to be more tax-efficient, as issuers can strategically redeem low-cost-basis securities to minimize future taxable gains. It is crucial to consider the unique ETF taxation issues prevalent in local markets.
ETF bid-ask spreads vary with trade size and reflect factors such as creation/redemption costs, the bid-ask spreads of the underlying securities, the cost of hedging or carrying positions, and market makers' profit spreads. For fixed-income ETFs, bid-ask spreads tend to be wider owing to the nature of dealer markets and the complexity of hedging.
Conversely, ETFs holding international stocks experience tighter bid-ask spreads when the underlying security markets are open for trading.
Premiums and discounts can occur within ETFs, representing the disparity between the exchange price of the ETF and the fund's calculated NAV based on underlying security prices. These differences can be attributed to time lags, liquidity, and supply-demand dynamics.
The costs associated with ETF ownership encompass various elements, including fund management fees, tracking error, portfolio turnover, trading costs (such as commissions and bid-ask spreads), taxable gains/losses, and security lending. The impact of these costs varies depending on the holding period, with one-time trading costs assuming greater significance for shorter-term tactical ETF traders, while management fees and turnover become more pronounced for longer-term buy-and-hold investors.
It is important to differentiate ETFs from exchange-traded notes (ETNs), as ETNs entail unique counterparty risks, and swap-based ETFs may also involve counterparty risk.
Additionally, ETF closures can give rise to unexpected tax liabilities.
In the realm of portfolio management, ETFs serve diverse purposes, including providing core asset class exposure, facilitating tactical strategies, implementing factor-based strategies, and contributing to portfolio efficiency applications like rebalancing, liquidity management, and transitions. ETFs are widely embraced by various types of investors seeking exposure to asset classes, equity style benchmarks, fixed-income categories, and commodities.
Thematic ETFs find utility in more targeted active portfolio management, while systematic strategies rely on rules-based benchmarks to access factors such as size, value, momentum, or quality. ETFs also find frequent application in multi-asset and global asset allocation strategies.
To effectively harness the potential of ETFs, investors should conduct comprehensive research and diligently evaluate factors such as the ETF's index construction methodology, costs, risks, and performance history.
This module covers multifactor models, which are essential tools for quantitative portfolio management.
These models play a crucial role in constructing portfolios, analyzing risk and return, and attributing sources of performance.
Unlike single-factor approaches, multifactor models provide a more detailed and nuanced view of risk. They describe asset returns by considering their exposure to a set of factors, including systematic factors that explain the average returns of various risky assets.
These factors represent priced risks for which investors demand additional compensation. The arbitrage pricing theory (APT) offers a framework in which expected asset returns are linear functions of their sensitivities to these factors, making fewer assumptions than the Capital Asset Pricing Model (CAPM).
Multifactor models can be categorized into macroeconomic factor models, fundamental factor models, and statistical factor models, depending on the nature of the factors used.
Macroeconomic factor models focus on surprises in macroeconomic variables that significantly influence asset class returns, while fundamental factor models consider attributes of stocks or companies that explain cross-sectional differences in stock prices. Statistical factor models utilize statistical methods to identify portfolios that best explain historical returns based on either covariances or variances.
These multifactor models find applications in return attribution, risk attribution, portfolio construction, and strategic investment decisions. Factor portfolios, which have unit sensitivity to a particular factor, are useful in these models.
Active return, active risk (also known as tracking error), and information ratio (mean active return divided by active risk) are important performance metrics in evaluating multifactor models.
Additionally, multifactor models facilitate the construction of portfolios that track market indexes or alternative indexes.
In summary, multifactor models provide a comprehensive framework for quantitative portfolio management, enabling investors to gain insights into risk, construct portfolios, and make strategic investment decisions by considering multiple sources of systematic risk.
This reading discusses market risk management models and explores various techniques used to manage risk arising from market fluctuations.
The key points covered are as follows:
The concept of Value at Risk (VaR) is introduced, which estimates the minimum expected loss, either in currency units or as a percentage of portfolio value, over a certain time period and under assumed market conditions.
VaR estimation involves decomposing portfolio performance into risk factors.
Three methods of estimating VaR are discussed: the parametric method, the historical simulation method, and the Monte Carlo simulation method.
The parametric method provides a VaR estimate based on a normal distribution, considering expected returns, variances, and covariances of portfolio components. However, it may not be accurate for portfolios with non-normally distributed returns, such as those containing options.
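The parametric calculation reduces to a one-line formula once the portfolio's expected return and volatility are known. The sketch below uses hypothetical portfolio figures; only the z-score and the formula itself come from the method described above.

```python
from statistics import NormalDist

def parametric_var(portfolio_value, mu, sigma, confidence=0.95):
    """One-period parametric (variance-covariance) VaR under the
    normality assumption, returned as a positive loss amount."""
    z = NormalDist().inv_cdf(1 - confidence)   # e.g. about -1.645 at 95%
    worst_return = mu + z * sigma              # return at the left-tail cutoff
    return -worst_return * portfolio_value

# Hypothetical portfolio: $10m value, 0.04% daily mean return,
# 1.2% daily return volatility
var_95 = parametric_var(10_000_000, mu=0.0004, sigma=0.012)
```

For a multi-asset portfolio, mu and sigma would themselves be computed from the component weights, expected returns, variances, and covariances; the normality assumption is exactly what breaks down for option-laden portfolios.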
The historical simulation method utilizes historical return data on the portfolio's holdings and allocation. It incorporates actual events but is reliant on the assumption that the future resembles the past.
The Monte Carlo simulation method requires specifying a statistical distribution of returns and generating random outcomes. It offers flexibility but can be complex and time-consuming.
It is important to note that there is no universally correct method for estimating VaR.
VaR offers advantages such as simplicity, ease of understanding and communication, capturing comprehensive information in a single measure, facilitating risk comparison across asset classes and portfolios, supporting capital allocation decisions, performance evaluation, and regulatory acceptance.
However, VaR has limitations, including subjectivity, sensitivity to discretionary choices, potential underestimation of extreme events, failure to account for liquidity and correlation risks, vulnerability to trending or volatility regimes, misconception as a worst-case scenario, oversimplification of risk, and focus on the left tail.
Variations and extensions of VaR, such as conditional VaR (CVaR), incremental VaR (IVaR), and marginal VaR (MVaR), provide additional useful information.
Conditional VaR measures the average loss conditional on exceeding the VaR cutoff.
Incremental VaR quantifies the change in portfolio VaR resulting from adding, deleting, or adjusting position sizes.
MVaR assesses the change in portfolio VaR due to small changes in positions and helps determine asset contributions to overall VaR in a diversified portfolio.
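Historical VaR and its conditional variant can be sketched together, since CVaR is just the average of the losses beyond the VaR cutoff. The return series below is invented for illustration, and the simple quantile indexing is one of several conventions.

```python
def historical_var_cvar(returns, confidence=0.95):
    """Historical-simulation VaR and conditional VaR (expected shortfall),
    both expressed as positive loss fractions. Assumes enough observations
    that the quantile index falls inside the sorted loss list."""
    losses = sorted(-r for r in returns)        # losses in ascending order
    cutoff = int(len(losses) * confidence)
    var = losses[cutoff]                        # loss at the chosen quantile
    tail = losses[cutoff:]                      # losses at or beyond VaR
    cvar = sum(tail) / len(tail)                # average tail loss
    return var, cvar

# Hypothetical daily return history
returns = [0.01, -0.02, 0.005, -0.035, 0.012, -0.01, 0.02,
           -0.05, 0.008, -0.015, 0.003, -0.025, 0.018, -0.004,
           0.006, -0.008, 0.011, -0.03, 0.009, -0.002]
var_90, cvar_90 = historical_var_cvar(returns, confidence=0.90)
```

By construction CVaR is at least as large as VaR, which is why it is preferred when the concern is the severity, not just the frequency, of tail losses.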
Ex ante tracking error measures the potential deviation of an investment portfolio's performance from its benchmark.
Sensitivity measures, such as beta, duration, convexity, delta, gamma, and vega, quantify how a security or portfolio reacts to changes in specific risk factors. However, they do not indicate the magnitude of potential losses.
Risk managers can employ sensitivity measures to gain a comprehensive understanding of portfolio sensitivity.
Stress tests subject a portfolio to extreme negative stress in specific exposures.
Scenario measures, including stress tests, evaluate portfolio performance under high-stress market conditions. Historical scenarios utilize past financial market history, while hypothetical scenarios model extreme movements and co-movements not yet experienced.
Reverse stress testing involves stressing a portfolio's significant exposures.
Sensitivity and scenario risk measures can complement VaR, as they do not rely solely on historical data and can overcome assumptions of normal distributions.
Constraints, such as risk budgets, position limits, scenario limits, stop-loss limits, and capital allocation, are widely used in risk management.
Risk measurements and constraints, along with leverage, risk factor exposure, accounting, and regulatory requirements, vary across market participants, such as banks, asset managers, pension funds, property and casualty insurers, and life insurers.
These risk measures help assess liquidity and asset/liability mismatch, potential losses, leverage ratios, interest rate sensitivities, economic capital, surplus at risk, asset allocation ranges, and the impact of catastrophic events on market and insurance risks.
In this reading, the candidate will explore various techniques used to enhance backtesting in the investment industry.
One such technique is rolling-window backtesting, which aims to approximate real-life investment processes and understand the risk-return trade-off of investment strategies.
The process involves specifying investment hypotheses and goals, determining strategy rules and processes, forming an investment portfolio, periodically rebalancing it, and analyzing performance and risk profiles.
In a rolling-window framework, researchers calibrate factors or trade signals on the current window, periodically rebalance the portfolio, and track its performance over time as the window rolls forward through the data.
This methodology serves as a proxy for actual investing. However, it is important to guard against data and methodological biases, such as survivorship bias and look-ahead bias, when conducting backtesting.
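The rolling-window loop described above can be sketched as follows. Everything here is hypothetical: the price series is made up, the "signal" is a toy momentum rule (hold the asset if the trailing-window return is positive), and rebalancing happens one period at a time.

```python
def rolling_backtest(prices, window=4):
    """Walk a fixed-size window through the price series, calibrate the
    signal on each window, and record the next period's strategy return."""
    strategy_returns = []
    for t in range(window, len(prices) - 1):
        history = prices[t - window:t + 1]        # calibration window only
        momentum = history[-1] / history[0] - 1   # trailing-return signal
        position = 1 if momentum > 0 else 0       # long the asset, or in cash
        next_ret = prices[t + 1] / prices[t] - 1  # out-of-window return
        strategy_returns.append(position * next_ret)
    return strategy_returns

prices = [100, 102, 101, 105, 107, 106, 110, 108, 112, 115]
rets = rolling_backtest(prices, window=4)
```

Note that the signal at time t uses only data available up to t, which is precisely the discipline that protects against look-ahead bias.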
Asset returns often exhibit characteristics such as negative skewness, excess kurtosis (fat tails), and tail dependence, which deviate from a normal distribution.
These characteristics introduce randomness and downside risk in asset returns that may not be fully captured by standard rolling-window backtesting. To address this, additional techniques can be employed.
Scenario analysis helps investors understand how an investment strategy performs in different structural regimes, accounting for potential structural breaks in financial data.
On the other hand, historical simulation, while relatively straightforward, shares similar pros and cons to rolling-window backtesting. It assumes that the historical data distribution is sufficient to represent future uncertainty and often uses bootstrapping techniques.
Monte Carlo simulation, a more sophisticated technique, requires selecting the statistical distribution of decision variables or return drivers.
The multivariate normal distribution is commonly used due to its simplicity. However, it fails to capture negative skewness and fat tails observed in factor and asset returns.
Complementing Monte Carlo simulation, sensitivity analysis explores the effects of changes in input variables on the target variable and risk profiles. This analysis helps uncover limitations in the conventional approach that assumes a multivariate normal distribution.
Another approach, the multivariate skewed t-distribution, considers skewness and kurtosis but requires estimating more parameters, which increases the likelihood of larger estimation errors.
These techniques provide investors with a more comprehensive understanding of the limitations and potential improvements in backtesting methodologies.
This reading delves into the essential concepts and principles associated with active portfolio management, which aims to enhance value compared to a benchmark portfolio by employing risk and return principles from mean-variance portfolio theory.
The following key points are discussed:
One of the key concepts is value added, which signifies the disparity between the returns of a managed portfolio and those of a passive benchmark portfolio.
To justify active management, value added must be expected to be positive ex ante.
Active weights within the portfolio reflect the variations in asset weights between the managed portfolio and benchmark portfolio. Positive active weights indicate overweighting, negative active weights indicate underweighting, and the sum of active weights equals zero.
Positive value added is achieved when assets with positive active weights generate higher returns than assets with negative active weights. This correlation between active asset returns and active weights determines the value added.
Value added can originate from diverse decisions, including security selection, asset class allocation, economic sector weightings, and geographic weights.
To evaluate actively managed portfolios, two important metrics are discussed: the Sharpe ratio and the information ratio. The Sharpe ratio assesses the reward per unit of risk in absolute returns, while the information ratio evaluates the reward per unit of risk in benchmark-relative returns.
Portfolios with higher information ratios tend to have higher Sharpe ratios. The optimal level of active management that maximizes a portfolio's Sharpe ratio depends on the assumed forecasting accuracy or ex ante information coefficient.
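The two ratios just contrasted can be computed side by side. The return figures below are hypothetical; the formulas (mean excess return over its volatility, and mean active return over tracking error) are the standard definitions described above.

```python
from statistics import mean, stdev

def sharpe_ratio(returns, rf):
    """Reward per unit of total risk: mean excess return over the
    standard deviation of excess returns."""
    excess = [r - rf for r in returns]
    return mean(excess) / stdev(excess)

def information_ratio(returns, benchmark_returns):
    """Reward per unit of active risk: mean active return over
    tracking error (standard deviation of active returns)."""
    active = [r - b for r, b in zip(returns, benchmark_returns)]
    return mean(active) / stdev(active)

# Hypothetical annual returns for a managed portfolio and its benchmark
portfolio = [0.08, 0.12, 0.05, 0.15, 0.10]
benchmark = [0.07, 0.10, 0.06, 0.12, 0.09]
sr = sharpe_ratio(portfolio, rf=0.02)
ir = information_ratio(portfolio, benchmark)
```

The Sharpe ratio is unaffected by adding cash or leverage, while the information ratio is unaffected by scaling the active weights up or down, which is why the two measures answer different questions about the same portfolio.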
Adjusting the active risk of a strategy and the total volatility of a portfolio are also discussed. The active risk can be adjusted by combining it with a benchmark position, and the total volatility can be adjusted by incorporating cash.
The fundamental law of active portfolio management furnishes a framework for evaluating investment strategies. It takes into account factors such as skill, portfolio structure, strategy breadth, and aggressiveness.
The fundamental law finds application in various contexts, including the selection of country equity markets and the timing of credit and duration exposures in fixed-income funds.
However, it's important to note the limitations of the fundamental law, which include uncertainties surrounding the ex ante information coefficient and the conceptual definition of strategy breadth.
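The fundamental law itself is a compact formula, shown here in its full form with a transfer coefficient; the IC, breadth, and TC values in the example are hypothetical.

```python
import math

def expected_information_ratio(ic, breadth, tc=1.0):
    """Full fundamental law of active management: IR = TC * IC * sqrt(BR),
    where IC is the ex ante information coefficient (skill), BR the number
    of independent decisions per period (breadth), and TC the transfer
    coefficient (how faithfully signals translate into active weights)."""
    return tc * ic * math.sqrt(breadth)

# Hypothetical: modest skill (IC = 0.05) applied to 100 independent
# decisions a year in an unconstrained portfolio (TC = 1) -> IR of ~0.5
ir = expected_information_ratio(ic=0.05, breadth=100)
```

The limitations noted above bite exactly here: the ex ante IC is an assumed, not observed, quantity, and counting truly independent decisions (breadth) is conceptually slippery when signals are correlated.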
The text discusses exchange rates and their complexities. It mentions that there is no simple framework for valuing exchange rates, but most economists believe in the existence of an equilibrium level that currencies gravitate towards in the long run.
The module points out the importance of forward exchange rates, which play a crucial role in understanding and predicting future currency values. Unlike spot exchange rates, which apply to immediate currency trades, forward exchange rates are quoted for trades that will be settled at a specified future date. These rates are expressed as forward points added to (or subtracted from) the spot exchange rate.
The relationship between forward exchange rates and spot exchange rates can provide valuable insights into market expectations and investor sentiment. When the forward exchange rate is higher than the spot exchange rate, it indicates a forward premium for the base currency, suggesting that market participants expect the base currency to appreciate in the future. Conversely, when the forward exchange rate is lower than the spot exchange rate, it signifies a forward discount, indicating expectations of depreciation for the base currency.
The determination of forward exchange rates is influenced by various factors, with the most significant being the interest rate differential between two currencies. Generally, higher interest rates in one country compared to another will result in a forward premium for the higher-yielding currency and a forward discount for the lower-yielding currency. The relationship between interest rate differentials and forward exchange rates reflects the concept of covered interest rate parity, which suggests that an investor can achieve the same return from investing domestically or in a foreign currency while hedging against exchange rate risk.
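Covered interest rate parity pins the forward rate down arithmetically. The sketch below assumes quotes expressed as price currency per unit of base currency and simple interest over the horizon; the currency pair and rates are hypothetical.

```python
def forward_rate(spot, i_price, i_base, tau=1.0):
    """Covered interest rate parity for a quote in price currency per
    unit of base currency, with simple interest over tau years:
        F = S * (1 + i_price * tau) / (1 + i_base * tau)
    """
    return spot * (1 + i_price * tau) / (1 + i_base * tau)

# Hypothetical: USD/EUR spot 1.1000, 1-year USD rate 5%, 1-year EUR rate 3%.
f = forward_rate(1.10, i_price=0.05, i_base=0.03)
points = f - 1.10   # positive forward points: the base currency (EUR)
                    # trades at a forward premium
```

Note the direction: the lower-yielding currency (here the euro) trades at a forward premium, exactly offsetting the interest differential so that a hedged foreign investment earns the domestic rate.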
The time to maturity also affects forward exchange rates, as the points added to the spot rate tend to be proportional to the time remaining until the settlement date. Longer maturities typically entail higher forward points, reflecting the increased uncertainty and risk associated with longer-term currency projections.
It is important to note that while forward exchange rates can provide valuable insights, they are not infallible predictors of future spot exchange rates. Market conditions, economic factors, and unforeseen events can lead to deviations between the forward and actual exchange rates. Nonetheless, forward rates remain an essential tool for businesses, investors, and speculators to manage and hedge against future currency fluctuations.
Misalignments in exchange rates gradually build up over time and can lead to economic imbalances. Factors such as monetary and fiscal policies, current account trends, capital flows, and government intervention affect exchange rate movements.
Key points highlighted include the difference between spot and forward exchange rates, bid and offer prices quoted by market makers, bid-offer spreads, and the concept of arbitrage. The text also covers international parity conditions, such as purchasing power parity, interest rate parity, and the international Fisher effect, and their impact on exchange rates. It notes that these conditions rarely hold in the short term but tend to hold over longer horizons.
The relationship between monetary policy, fiscal policy, and exchange rates is discussed, including how tightening or easing monetary policy can affect currency value. The Mundell-Fleming model and the monetary model of exchange rate determination are mentioned in this context. The impact of fiscal policy on interest rates, capital flows, and trade balance is also explored.
The portfolio balance model suggests that government debt and budget deficits can influence exchange rates if investors are compensated with higher returns or currency depreciation. Capital inflows can lead to boom-like conditions and currency overvaluation, while capital controls may be used to manage exchange rates and prevent crises. The role of government policies in influencing exchange rates is emphasized, particularly in emerging markets with large foreign exchange reserves.
The text concludes by discussing factors associated with currency crises, such as capital market liberalization, foreign capital inflows, banking crises, fixed exchange rates, declining foreign exchange reserves, deviations from historical means, deteriorating terms of trade, money growth, and inflation.
Having knowledge of regulation is crucial because it can have far-reaching and significant effects, impacting both the broader economy and individual entities and securities.
When analyzing regulation, it is important to consider its origin from various sources and its application in different areas. A framework that encompasses different types of regulators, regulatory approaches, and areas of impact can be helpful in understanding the potential effects of new regulations and their implications for different entities.
It is common for multiple regulators to address specific issues, each with their own objectives and preferred regulatory tools. When developing regulations, regulators should carefully consider the costs and benefits involved. Assessing the net regulatory burden, which takes into account the private costs and benefits of regulation, is also an important aspect of regulatory analysis. However, evaluating the costs and benefits of regulation can be challenging.
Key points from the reading include the necessity of regulation to address informational frictions and externalities, the extensive regulation of securities markets and financial institutions due to the potential consequences of failures in the financial system, and the focus of regulators on areas such as prudential supervision, financial stability, market integrity, and economic growth.
Regulatory competition refers to the competition among regulatory bodies to attract specific entities through the use of regulation. Given the wide scope of regulation, it is useful to have a framework that identifies potential areas of regulation, both current and anticipated, which may impact the entity under consideration.
Regulation is typically enacted by legislative bodies, issued by regulatory bodies in the form of administrative regulations, and interpreted by courts, resulting in judicial law. The interdependence and potentially conflicting objectives among regulators are important factors to consider for regulators, regulated entities, and those assessing the effects of regulation.
Regulatory capture occurs when regulation is designed to benefit the interests of regulated entities. Regulators have responsibilities in both substantive and procedural laws, addressing the rights, responsibilities, relationships among entities, and the protection and enforcement of these laws.
Entities may engage in regulatory arbitrage, utilizing differences in economic substance, regulatory interpretation, or regulatory regimes to their advantage. Regulators have a range of tools at their disposal, including regulatory mandates, behavior restrictions, provision of public goods, and public financing of private projects. It is important to choose regulatory tools that maintain a stable regulatory environment, characterized by predictability, effectiveness, time consistency, and enforceability.
To assess regulation and its outcomes, regulators should conduct ongoing cost-benefit analyses, develop improved measurement techniques for regulatory outcomes, and apply economic principles as guiding factors. Analysts should also consider the net regulatory burden on the entity of interest as a crucial aspect to evaluate.
Intercompany investments have a significant impact on business operations and pose challenges for analysts when assessing company performance. There are five main forms of investments in other companies: investments in financial assets, investments in associates, joint ventures, business combinations, and investments in special purpose and variable interest entities.
Investments in financial assets refer to investments where the investor has no significant influence. They can be measured and reported based on fair value through profit or loss, fair value through other comprehensive income, or amortized cost. Both IFRS and US GAAP treat investments in financial assets similarly.
Investments in associates and joint ventures involve situations where the investor has significant influence but not control over the investee's business activities. Both IFRS and US GAAP require the equity method of accounting for these investments. Under the equity method, income is recognized as earned rather than when dividends are received. The equity investment is carried at cost, adjusted for post-acquisition income and dividends. It is reported as a single line item on the balance sheet and income statement.
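The equity-method mechanics described above reduce to simple arithmetic. The stake size, income, and dividend figures below are hypothetical; the carrying-value formula is the one the paragraph describes.

```python
def equity_method_carrying_value(cost, ownership,
                                 investee_income, investee_dividends):
    """Equity method: the investment is carried at cost, increased by the
    investor's share of post-acquisition income and decreased by the
    investor's share of dividends received."""
    return (cost
            + ownership * investee_income
            - ownership * investee_dividends)

# Hypothetical: a 30% stake in an associate bought for $300,000; in year 1
# the associate earns $100,000 and pays $40,000 in dividends.
carrying = equity_method_carrying_value(300_000, 0.30, 100_000, 40_000)
equity_income = 0.30 * 100_000   # single line item in the income statement
```

Dividends reduce the carrying amount rather than flowing through income, which is the key difference from accounting for passive financial-asset investments.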
Business combinations are accounted for using the acquisition method under both IFRS and US GAAP. The fair value of the consideration given is used to measure the identifiable assets and liabilities acquired in the combination. Goodwill, representing the difference between the acquisition value and the fair value of the target's net assets, is not amortized. Instead, it is evaluated for impairment at least annually, with impairment losses reported on the income statement. IFRS and US GAAP use different approaches to determine and measure impairment losses.
When the acquiring company owns less than 100% of the target company, the non-controlling (minority) shareholders' interests are reported on the consolidated financial statements. IFRS allows the non-controlling interest to be measured at fair value (full goodwill) or the proportionate share of the acquiree's net assets (partial goodwill). US GAAP requires the non-controlling interest to be measured at fair value (full goodwill).
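The full versus partial goodwill distinction is easiest to see in a worked calculation. The acquisition figures below are hypothetical; the two NCI measurement bases are those described above.

```python
def goodwill(consideration, ownership, fair_value_net_assets,
             nci_fair_value=None):
    """Goodwill under the acquisition method for a partial acquisition.
    Partial goodwill (IFRS option): NCI at its proportionate share of the
    acquiree's identifiable net assets. Full goodwill (US GAAP; also an
    IFRS option): NCI at its fair value."""
    if nci_fair_value is None:
        nci = (1 - ownership) * fair_value_net_assets   # partial goodwill
    else:
        nci = nci_fair_value                            # full goodwill
    return consideration + nci - fair_value_net_assets

# Hypothetical: 80% of a target acquired for $800,000; identifiable net
# assets have a fair value of $700,000; the 20% NCI is worth $190,000.
partial = goodwill(800_000, 0.80, 700_000)                       # 240,000
full = goodwill(800_000, 0.80, 700_000, nci_fair_value=190_000)  # 290,000
```

Full goodwill exceeds partial goodwill whenever the NCI's fair value exceeds its proportionate share of net assets, so the same acquisition can report different goodwill and NCI balances under the two options.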
Consolidated financial statements are prepared in each reporting period to present the combined financial information of the parent company and its subsidiaries.
Special purpose entities (SPEs) and variable interest entities (VIEs) must be consolidated by the entity expected to absorb the majority of expected losses or receive the majority of expected residual benefits.
The reading delves into two types of employee compensation: post-employment benefits and share-based compensation. Despite their differences, both forms go beyond salaries and pose complex challenges in terms of valuation, accounting, and reporting.
Although there is a convergence between IFRS and US GAAP accounting standards, variations in social systems, laws, and regulations across countries can result in discrepancies in pension and share-based compensation plans, impacting a company's financial reports and earnings.
Defined contribution pension plans establish the contribution amount, with the eventual pension benefit depending on the value of plan assets at retirement. These plans do not generate liabilities, making balance sheet reporting less significant. Conversely, defined benefit pension plans specify the pension benefit based on factors like tenure and final salary. Such plans are funded by the company through a separate pension trust, with funding requirements differing across countries.
Both IFRS and US GAAP mandate reporting a pension liability or asset on the balance sheet, calculated as the projected benefit obligation minus the fair value of plan assets. However, there are limitations on the amount of pension assets that can be reported.
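The balance sheet amount described above is a simple netting, sketched here with hypothetical plan figures.

```python
def net_pension_position(pbo, plan_assets):
    """Balance sheet amount for a defined benefit plan: projected benefit
    obligation less the fair value of plan assets. Positive -> net pension
    liability; negative -> net pension asset (subject to a reporting cap
    on the asset)."""
    return pbo - plan_assets

# Hypothetical plan: PBO of $5.2m against plan assets of $4.8m
status = net_pension_position(5_200_000, 4_800_000)   # $0.4m net liability
```

An underfunded plan (assets below the obligation) thus appears as a liability, an overfunded one as an asset capped by the recoverable amount rules.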
Under IFRS, service costs and net interest expense or income are recognized in the profit and loss (P&L) statement, while remeasurements are recognized in other comprehensive income (OCI) and are not subsequently amortized to P&L. Under US GAAP, current service costs and interest expense are recognized in the P&L, while other components are recognized in OCI and gradually amortized to future P&L.
Estimating future obligations for defined benefit pension plans and post-employment benefits relies on various assumptions, such as discount rates, salary increases, expected returns on plan assets, and healthcare cost inflation.
These assumptions significantly impact the estimated obligations.
Employee compensation packages serve different purposes, such as providing liquidity, retaining employees, and offering incentives. Salary, bonuses, and share-based compensation are common elements.
Share-based compensation, including stocks and stock options, aligns employees' interests with those of shareholders without necessitating immediate cash outflows.
Both IFRS and US GAAP require reporting the fair value of share-based compensation, and the chosen valuation technique or option pricing model is disclosed.
Assumptions such as exercise price, stock price volatility, award duration, forfeited options, dividend yield, and risk-free interest rate influence the estimated fair value and compensation expense, with subjective assumptions potentially exerting a significant impact.
The translation of foreign currency amounts is a significant accounting issue for multinational companies. Fluctuations in foreign exchange rates result in changes in the functional currency values of foreign currency assets, liabilities, and subsidiaries over time. These changes give rise to foreign exchange differences that must be reflected in a company's financial statements. The main accounting issues in managing multinational operations are how to measure these foreign exchange differences and whether to include them in the calculation of net income.
The local currency is the national currency of the entity's location, while the functional currency is the currency of the primary economic environment where the entity operates. Usually, the local currency is also the functional currency. Any currency other than the functional currency is considered a foreign currency for accounting purposes. The presentation currency is the currency in which financial statement amounts are presented, often being the same as the local currency.
When a sale or purchase is denominated in a foreign currency, the revenue or inventory and the foreign currency accounts receivable or accounts payable are translated into the seller's or buyer's functional currency using the exchange rate on the transaction date. Any changes in the functional currency value of the foreign currency accounts receivable or accounts payable between the transaction date and the settlement date are recognized as foreign currency transaction gains or losses in net income.
If a balance sheet date falls between the transaction date and the settlement date, the foreign currency accounts receivable or accounts payable are translated at the exchange rate on the balance sheet date. The change in the functional currency value of these accounts is recognized as a foreign currency transaction gain or loss in income. It's important to note that these gains and losses are unrealized at the time of recognition and may or may not be realized when the transactions are settled.
Foreign currency transaction gains occur when an entity has a foreign currency receivable and the foreign currency strengthens or has a foreign currency payable and the foreign currency weakens. Conversely, foreign currency transaction losses arise when an entity has a foreign currency receivable and the foreign currency weakens or has a foreign currency payable and the foreign currency strengthens.
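These gain-and-loss mechanics can be illustrated with a small sketch; the currency pair, amounts, and rates are hypothetical, and rates are quoted as functional-currency units per one unit of foreign currency.

```python
def fx_transaction_gain(fc_amount: float, rate_at_transaction: float,
                        rate_at_settlement: float, is_receivable: bool = True) -> float:
    """Gain (+) or loss (-) in functional-currency units on a foreign currency
    receivable or payable between the transaction and settlement dates."""
    change = fc_amount * (rate_at_settlement - rate_at_transaction)
    # A strengthening foreign currency helps the holder of a receivable
    # and hurts the holder of a payable, so the sign flips for payables.
    return change if is_receivable else -change

# Hypothetical: EUR 100,000 receivable booked at 1.10 USD/EUR, settled at 1.15 USD/EUR
gain = fx_transaction_gain(100_000, 1.10, 1.15)  # approx. +5,000 USD gain
```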
Companies are required to disclose the net foreign currency gain or loss included in income. They can choose to report foreign currency transaction gains and losses as part of operating income or as a separate component of non-operating income. Differences in reporting methods can affect the comparability of operating profit and operating profit margin between companies.
In preparing consolidated financial statements, the foreign currency financial statements of foreign operations need to be translated into the presentation currency of the parent company.
Two translation methods are commonly used. Under the current rate method, assets and liabilities are translated at the current exchange rate, income statement items at the average rate, and equity items at historical rates. Under the temporal method, monetary items are translated at the current rate and non-monetary items at historical rates, with related income statement items (such as depreciation and cost of goods sold) translated at the rates used for the corresponding balance sheet items.
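A minimal sketch of the two methods, assuming hypothetical rates and a two-item balance sheet; income statement translation and the resulting translation adjustment are omitted for brevity.

```python
# Exchange rates quoted as presentation-currency units per foreign currency (FC) unit.
CURRENT_RATE = 1.25     # rate at the balance sheet date
HISTORICAL_RATE = 1.00  # rate when the non-monetary item was acquired

balances_fc = {"cash": 100, "inventory": 50}  # cash is monetary, inventory is not

def translate_current_rate(balances: dict) -> dict:
    # Current rate method: all assets and liabilities at the current rate.
    return {k: v * CURRENT_RATE for k, v in balances.items()}

def translate_temporal(balances: dict, monetary=frozenset({"cash"})) -> dict:
    # Temporal method: monetary items at the current rate,
    # non-monetary items at their historical rates.
    return {k: v * (CURRENT_RATE if k in monetary else HISTORICAL_RATE)
            for k, v in balances.items()}
```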
The choice of translation method and the recognition of translation adjustments as income or as a separate component of equity depend on the foreign operation's functional currency.
Disclosure requirements include reporting the total amount of translation gain or loss in income and the amount of translation adjustment in stockholders' equity. There is no requirement to separately disclose gains or losses from foreign currency transactions and gains or losses from the application of the temporal method.
It's important to understand the impact of foreign currency translation on a company's financial results, regardless of whether the financial statements are prepared in accordance with International Financial Reporting Standards (IFRS) or US Generally Accepted Accounting Principles (GAAP).
While there are no major differences in foreign currency translation rules between IFRS and US GAAP, treatment of foreign operations in highly inflationary countries may vary.
Analysts can gather information about the tax impact of multinational operations from companies' disclosures on effective tax rates.
Changes in exchange rates between the reporting currency and the currency in which sales are made can affect a multinational company's sales growth, but growth resulting from changes in volume or price is generally considered more sustainable than growth from currency exchange rate moves.
Financial institutions, due to their systemic importance, are subject to heavy regulation to mitigate systemic risk.
The failures of Silicon Valley Bank (SVB) and First Republic Bank in 2023 illustrate the importance of monitoring large banks and assessing their financial health in order to prevent such events from recurring.
The Basel Committee, comprising representatives from central banks and bank supervisors worldwide, establishes international regulatory standards for banks, including minimum capital, liquidity, and stable funding requirements.
Financial stability is a key focus of international organizations such as the Financial Stability Board, International Association of Insurance Supervisors, International Association of Deposit Insurers, and International Organization of Securities Commissions.
Unlike manufacturing or merchandising companies, financial institutions primarily hold financial assets, exposing them to credit risk, liquidity risk, market risk, and interest rate risk. Their asset values closely align with fair market values.
The CAMELS approach is widely used to analyze banks, considering Capital adequacy, Asset quality, Management capabilities, Earnings sufficiency, Liquidity position, and Sensitivity to market risk. Capital adequacy ensures the bank can absorb losses without severe financial damage, while asset quality and risk management play a crucial role. Management capabilities, earnings, liquidity, and market risk sensitivity are also evaluated.
Other important attributes in analyzing banks include government support, corporate culture, competitive environment, off-balance-sheet items, segment information, currency exposure, and risk disclosures.
Insurance companies can be categorized as property and casualty (P&C) or life and health (L&H). They generate revenue from premiums and investment income. P&C policies have shorter terms and more variable claims, while L&H policies are longer-term and more predictable.
In analyzing insurance companies, key areas to consider include their business profile, earnings characteristics, investment returns, liquidity, and capitalization. Profitability analysis for P&C companies includes evaluating loss reserves and the combined ratio.
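The combined ratio mentioned above can be sketched as follows; the figures are hypothetical, and some definitions compute the expense ratio on net premiums written rather than earned.

```python
def combined_ratio(losses_incurred: float, underwriting_expenses: float,
                   net_premiums_earned: float) -> float:
    """Combined ratio = loss ratio + expense ratio.
    A ratio below 100% indicates an underwriting profit."""
    loss_ratio = losses_incurred / net_premiums_earned
    expense_ratio = underwriting_expenses / net_premiums_earned
    return loss_ratio + expense_ratio

# Hypothetical P&C insurer: losses 70, expenses 25, premiums earned 100
cr = combined_ratio(70, 25, 100)  # 0.95 -> underwriting profit
```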
Assessing the quality of financial reports is a crucial analytical skill that involves the evaluation of reporting quality and results quality.
The quality of financial reporting can vary, ranging from high to low, and can be influenced by various factors such as revenue and expense recognition, classification on the statement of cash flows, and the recognition, classification, and measurement of assets and liabilities on the balance sheet.
To evaluate financial reporting quality, several steps are typically followed.
These steps include gaining an understanding of the company's business and industry, comparing financial statements across different periods, assessing accounting policies, conducting financial ratio analysis, examining the statement of cash flows, reviewing risk disclosures, and analyzing management compensation and insider transactions. Earnings of high quality, assuming high reporting quality, have a greater positive impact on a company's value compared to earnings of low quality.
Indicators used to assess earnings quality encompass recurring earnings, earnings persistence, outperforming benchmarks, and identifying poor-quality earnings through enforcement actions and restatements.
Earnings with a significant accrual component tend to be less persistent, and companies consistently meeting or narrowly beating benchmarks may raise concerns about the quality of their earnings. Accounting misconduct often involves problems with revenue recognition or misrepresentation of expenses.
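A simple cash-flow-based accruals sketch with hypothetical figures; scaling by average net operating assets is one common way to normalize the measure across firms.

```python
def aggregate_accruals(net_income: float, cfo: float) -> float:
    """Cash-flow-based accruals: net income minus cash flow from operations.
    A large positive accrual component suggests less persistent earnings."""
    return net_income - cfo

def accruals_ratio(net_income: float, cfo: float,
                   avg_net_operating_assets: float) -> float:
    # Scaled version for cross-company comparison (hypothetical inputs below).
    return (net_income - cfo) / avg_net_operating_assets

acc = aggregate_accruals(100, 60)          # 40 of reported earnings is accruals
ratio = accruals_ratio(100, 60, 400)       # 0.10 of average net operating assets
```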
Financial results quality can be evaluated through bankruptcy prediction models, which measure the likelihood of default or bankruptcy. Similarly, high-quality reported cash flows indicate satisfactory economic performance and reliable reporting, while low-quality cash flows may reflect genuine underperformance or distorted representation of economic reality.
Regarding the balance sheet, financial reporting quality is indicated by completeness, unbiased measurement, and clear presentation.
The presence of off-balance-sheet debt compromises completeness, and unbiased measurement is particularly critical when subjective valuation of assets and liabilities is involved.
Financial statements can provide insights into financial or operational risks, and the management commentary (MD&A) can furnish valuable information about risk exposures and risk management strategies. Required disclosures, such as changes in senior management or delays in financial reporting, can serve as warning signs of potential issues with financial reporting quality.
The financial press can also bring to light previously unidentified reporting problems, necessitating further investigation by analysts.
Real estate investments encompass different forms, including direct ownership (private equity), indirect ownership through publicly traded equity, direct mortgage lending (private debt), and securitized mortgages (publicly traded debt). Investing in real estate income property offers various motivations, such as current income, price appreciation, inflation hedge, diversification, and tax benefits.
Incorporating equity real estate investments into a traditional portfolio can bring diversification benefits due to their imperfect correlation with stocks and bonds. Equity real estate investments may also serve as a hedge against inflation if the income stream can be adjusted accordingly and real estate prices rise with inflation. Debt investors in real estate primarily rely on promised cash flows and typically do not participate in the property's value appreciation, similar to fixed-income investments like bonds.
The performance of real estate investments is influenced by the underlying property's value, with location playing a crucial role in determining its worth. Real estate possesses distinctive characteristics compared to other asset classes, including its heterogeneity and fixed location, high unit value, management intensiveness, high transaction costs, depreciation, sensitivity to the credit market, illiquidity, and challenges in determining its value and price.
A wide range of real estate properties is available for investment, with the primary commercial categories being office, industrial and warehouse, retail, and multi-family. Each property type exhibits varying susceptibilities to risk factors, such as business conditions, lead time for development, supply and demand dynamics, capital availability and cost, unexpected inflation, demographics, liquidity, environmental considerations, information availability, management expertise, and leverage.
The value of each property type is influenced by factors such as its location, lease structures, and economic indicators like economic growth, population growth, employment trends, and consumer spending patterns.
Real estate appraisers utilize three primary valuation approaches: income, cost, and sales comparison. The income approach involves methods like direct capitalization and discounted cash flow, which consider net operating income and growth expectations to determine property value.
The cost approach estimates value based on adjusted replacement cost and is employed for properties with limited market comparables. The sales comparison approach determines property value by examining the prices of comparable properties in the current market. When purchasing real estate with debt financing, additional ratios and returns such as the loan-to-value ratio, debt service coverage ratio, and leveraged and unleveraged internal rates of return are considered by debt and equity investors.
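The income-approach capitalization and the debt-financing ratios mentioned above can be sketched with hypothetical figures.

```python
def value_direct_capitalization(noi: float, cap_rate: float) -> float:
    """Income approach (direct capitalization): value = NOI / cap rate."""
    return noi / cap_rate

def loan_to_value(loan_amount: float, appraised_value: float) -> float:
    return loan_amount / appraised_value

def debt_service_coverage(noi: float, annual_debt_service: float) -> float:
    return noi / annual_debt_service

# Hypothetical property: NOI 400,000, cap rate 5%, loan of 5,000,000
value = value_direct_capitalization(400_000, 0.05)  # approx. 8,000,000
ltv = loan_to_value(5_000_000, value)               # approx. 0.625
dscr = debt_service_coverage(400_000, 320_000)      # 1.25
```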
Publicly traded real estate securities encompass various types, including real estate investment trusts (REITs), real estate operating companies (REOCs), and residential and commercial mortgage-backed securities (RMBS and CMBS).
REITs, in particular, tend to offer higher yields and more stable income than most other equities. Their valuation can be approached through net asset value because active private markets exist for their underlying real estate assets. REITs are generally exempt from corporate income tax on earnings distributed to shareholders, but they have less flexibility in their real estate activities and in reinvesting operating cash flows. Factors considered when assessing REIT investments include economic trends, retail sales, job creation, population growth, supply and demand dynamics, leasing activity, financial health of tenants, leverage, and management quality.
Analysts adjust REIT financial statements to obtain more accurate measures of income and net worth.
Valuation methods such as funds from operations, adjusted funds from operations, dividend discount models, and discounted cash flow models are commonly used.
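One common set of FFO and AFFO definitions can be sketched as follows; exact adjustments vary by analyst, and all figures are hypothetical.

```python
def funds_from_operations(net_income: float, depreciation_amortization: float,
                          gains_on_property_sales: float) -> float:
    """FFO adds back real estate depreciation and excludes gains on property
    sales, which are non-recurring."""
    return net_income + depreciation_amortization - gains_on_property_sales

def adjusted_ffo(ffo: float, straight_line_rent_adjustment: float,
                 recurring_capex: float) -> float:
    """AFFO (one common definition): FFO less non-cash rent and
    recurring maintenance-type capital expenditures."""
    return ffo - straight_line_rent_adjustment - recurring_capex

# Hypothetical REIT: net income 200, D&A 150, gains on sales 30
ffo = funds_from_operations(200, 150, 30)   # 320
affo = adjusted_ffo(ffo, 10, 60)            # 250
```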
Private equity funds aim to generate value through various methods, including optimizing financial structures, incentivizing management, and implementing operational improvements.
Unlike publicly traded companies, private equity consolidates ownership and control, which is seen as a fundamental driver of returns for top-performing funds. Valuing potential investments poses challenges due to their private nature, requiring different techniques based on the nature of the investment.
Debt financing availability significantly impacts the scale of private equity activity and influences market valuations. Private equity funds operate as "buy-to-sell" investors, focusing on acquiring, adding value, and strategically exiting investments within the fund's lifespan.
Proper exit planning plays a critical role in realizing value. Continuously valuing the investment portfolio poses challenges as market values are not readily observable, necessitating subjective judgment.
The primary performance metrics for private equity funds are internal rate of return (IRR) and multiples, but comparing returns across funds and asset classes requires careful consideration of factors such as cash flow timing, risk differences, portfolio composition, and vintage-year effects.
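IRR and the investment multiple can be sketched as follows; the bisection solver and the fund cash flows are illustrative assumptions, not a prescribed method.

```python
def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """IRR by bisection on the NPV of equally spaced annual cash flows
    (cash_flows[0] at t=0). Assumes NPV changes sign exactly once on [lo, hi]."""
    def npv(r):
        return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(lo) * npv(mid) <= 0:   # root lies in the lower half
            hi = mid
        else:                         # root lies in the upper half
            lo = mid
    return (lo + hi) / 2

def investment_multiple(distributions, residual_value, paid_in_capital):
    """Total value to paid-in (TVPI): (distributions + NAV) / paid-in capital."""
    return (distributions + residual_value) / paid_in_capital

# Hypothetical fund: -100 invested at t=0, +150 returned at t=2
r = irr([-100, 0, 150])                     # approx. 0.2247 per year
m = investment_multiple(120, 30, 100)       # 1.5x
```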
Fundamental analysis relies on industry and company analysis, which involve various approaches to forecasting income and expenses. These approaches include top-down, bottom-up, and hybrid methods.
Top-down approaches start at the macroeconomic level, while bottom-up approaches focus on individual companies or business segments. Hybrid approaches combine elements of both.
When it comes to revenue forecasting, analysts can use different techniques. One approach involves forecasting the growth rate of nominal GDP and the industry/company's growth relative to GDP growth. Another method combines forecasts of market growth with predictions of the company's market share in specific markets.
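Both forecasting techniques can be sketched with hypothetical inputs; the "relative growth to GDP" and market-share parameters are illustrative assumptions.

```python
def revenue_growth_top_down(nominal_gdp_growth: float,
                            growth_relative_to_gdp: float,
                            company_premium: float = 0.0) -> float:
    """Top-down sketch: the industry grows at a multiple of nominal GDP growth,
    and the company grows at the industry rate plus a relative premium."""
    return nominal_gdp_growth * growth_relative_to_gdp + company_premium

def revenue_from_market_share(market_size: float, market_growth: float,
                              market_share: float) -> float:
    """Growth-and-share sketch: next-period revenue = grown market x expected share."""
    return market_size * (1 + market_growth) * market_share

g = revenue_growth_top_down(0.04, 1.5, 0.01)             # approx. 0.07, i.e. 7% growth
rev = revenue_from_market_share(1_000_000, 0.04, 0.15)   # approx. 156,000
```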
Positive correlations between operating margins and sales indicate economies of scale within an industry. Certain items on the balance sheet, such as retained earnings, are directly influenced by the income statement, while accounts receivable, accounts payable, and inventory closely relate to income statement projections. Efficiency ratios are commonly utilized to model working capital accounts.
Return on invested capital (ROIC) is an after-tax profitability metric that divides net operating profit after tax (NOPAT) by invested capital, measured as operating assets minus operating liabilities. Sustained high levels of ROIC often suggest a competitive advantage.
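A quick sketch of the ROIC calculation with hypothetical figures.

```python
def roic(nopat: float, operating_assets: float,
         operating_liabilities: float) -> float:
    """ROIC = NOPAT / invested capital, where invested capital is
    operating assets minus operating liabilities."""
    invested_capital = operating_assets - operating_liabilities
    return nopat / invested_capital

# Hypothetical: NOPAT 120, operating assets 1,000, operating liabilities 200
r = roic(120, 1_000, 200)  # 0.15
```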
Competitive factors affect a company's ability to negotiate input prices with suppliers and adjust product or service prices. Porter's five forces framework helps identify these factors. Inflation or deflation impacts pricing strategies, taking into account industry structure, competitive forces, and consumer demand characteristics.
When a new technological development introduces a product that may impact the demand for existing products, analysts can estimate the effect by combining a unit forecast for the new product with an expected cannibalization factor.
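The cannibalization estimate described above can be sketched as follows; the units, prices, and cannibalization factor are hypothetical.

```python
def cannibalized_units(new_product_units: float,
                       cannibalization_factor: float) -> float:
    """Units of existing-product demand lost to the new product,
    given a cannibalization factor between 0 and 1."""
    return new_product_units * cannibalization_factor

def incremental_revenue(new_units: float, new_price: float,
                        cannibalization_factor: float, old_price: float) -> float:
    # Incremental revenue = new-product revenue minus revenue lost on
    # cannibalized sales of the existing product.
    return new_units * new_price - cannibalized_units(new_units, cannibalization_factor) * old_price

# Hypothetical: 10,000 new units at 50, cannibalizing 25% of old sales priced at 40
lost = cannibalized_units(10_000, 0.25)               # 2,500 units
net_rev = incremental_revenue(10_000, 50, 0.25, 40)   # 400,000
```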
The choice of the explicit forecast horizon depends on factors such as the projected holding period, portfolio turnover, industry cyclicality, company-specific considerations, and employer preferences.
Analyst forecasts can be influenced by behavioral biases like overconfidence, the illusion of control, conservatism, representativeness, and confirmation bias. These biases can have an impact on the accuracy of the forecasts.