The CFA Level II exam consists of 22 item sets, 11 in each of two sessions (one in the morning, the second in the afternoon). Each session contains 44 multiple-choice questions, for a total of 88 questions and an average of 4 questions per item set.
The CFA Level II exam lasts 4 hours and 24 minutes, divided into two equal sessions of 2 hours and 12 minutes, one in the morning and the second in the afternoon with an optional break in between.
For more information:
https://www.cfainstitute.org/en/programs/cfa/exam/level-ii
Topic | Exam Weight
Ethical and Professional Standards | 10-15%
Quantitative Methods | 5-10%
Economics | 5-10%
Financial Statement Analysis | 10-15%
Corporate Issuers | 5-10%
Equity Investments | 10-15%
Fixed Income | 10-15%
Derivatives | 5-10%
Alternative Investments | 5-10%
Portfolio Management | 10-15%
This reading focuses on the principles and tools used to value fixed-income securities through arbitrage. It introduces the binomial interest rate tree as a valuable tool for valuing both option-free bonds and bonds with embedded options.
The main points covered include the fundamental principle of valuation, the composition of fixed-income securities as portfolios of zero-coupon bonds, the elimination of arbitrage opportunities in well-functioning markets, the arbitrage-free approach for valuing bonds, and the use of the binomial interest rate tree to represent and value interest rate fluctuations.
The reading also discusses backward induction valuation, pathwise valuation, and the Monte Carlo method as valuation methodologies.
Additionally, it explores term structure models for explaining yield curve shape and the distinction between arbitrage-free models and equilibrium models in valuing bonds with embedded options.
This reading introduces the principles and techniques of arbitrage valuation for fixed-income securities, with a specific focus on the binomial interest rate tree. The key points discussed include the following:
First and foremost, the valuation of financial assets is based on the fundamental principle that their value is determined by the present value of expected future cash flows.
Moreover, a fixed-income security is a combination of zero-coupon bonds, each having its own discount rate, which depends on the shape of the yield curve and the timing of cash flows.
In well-functioning markets, prices adjust to eliminate any opportunities for arbitrage, which refers to transactions resulting in riskless profits without any cash outlay.
The arbitrage-free approach is implemented by treating a security as a portfolio of zero-coupon bonds and valuing each of those bonds according to its own maturity and the appropriate discount rate for that maturity.
For bonds without embedded options, the arbitrage-free value is simply the present value of expected future cash flows, calculated using benchmark spot rates.
Furthermore, the binomial interest rate tree provides a framework where the short interest rate can take on two possible values, following assumptions about volatility and a lognormal random walk interest rate model.
An interest rate tree is a graphical representation of the potential interest rate values (forward rates), based on an interest rate model and assumptions about interest rate volatility.
The interest rates in subsequent periods are determined by three main assumptions: an interest rate model governing the random process of interest rates, the assumed level of interest rate volatility, and the current benchmark yield curve.
Within the interest rate tree, adjacent interest rates at the same point in time differ by a factor of e^(2σ), a consequence of the assumed lognormal distribution. As interest rates approach zero, the absolute change between adjacent rates becomes smaller.
To determine the value of a bond at a specific node, the valuation methodology known as backward induction is used, starting from maturity and working back from right to left.
It's important to calibrate the interest rate tree to match the current yield curve by selecting interest rates that yield the benchmark bond value, ensuring an arbitrage-free valuation.
In addition, option-free bonds, when valued using the binomial interest rate tree, should yield the same value as when discounted using spot rates.
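As an illustration, backward induction can be sketched in a few lines of Python. The tree representation and all rates below are hypothetical, chosen only to show the mechanics; with a flat 5% tree, a 5% annual-coupon bond prices at par, matching discounting at spot rates.

```python
# Backward induction on a binomial interest rate tree (illustrative sketch).
# tree[t] lists the possible one-period rates at time t; node i at time t moves
# to node i or i+1 at time t+1, each with (risk-neutral) probability 0.5.
def value_bond(coupon, par, tree):
    T = len(tree)                       # number of periods before maturity
    values = [float(par)] * (T + 1)     # bond values at the maturity nodes
    for t in range(T - 1, -1, -1):      # work back from right to left
        values = [
            (0.5 * (values[i] + values[i + 1]) + coupon) / (1 + tree[t][i])
            for i in range(len(tree[t]))
        ]
    return values[0]

# Flat 5% tree: a 5% coupon bond should be worth par.
price = value_bond(coupon=5.0, par=100.0, tree=[[0.05], [0.05, 0.05]])
```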
Pathwise valuation calculates the present value of a bond for each possible interest rate path and then takes the average value across all paths.
The Monte Carlo method offers another approach: simulating a large number of potential interest rate paths to understand how the value of a security is affected. It randomly selects paths to approximate the results of a complete pathwise valuation.
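The pathwise approach can be sketched as follows: enumerate every up/down path through a (hypothetical) rate tree, discount the bond's cash flows along each path, and average. For an option-free bond this reproduces the backward-induction value.

```python
from itertools import product

def pathwise_value(coupon, par, tree):
    """Average the PV of the bond's cash flows over every interest-rate path.
    tree[t] = possible one-period rates at time t; a move of 1 steps the node up."""
    T = len(tree)
    paths = list(product([0, 1], repeat=T - 1))  # up/down moves after t=0
    total = 0.0
    for moves in paths:
        node, rates = 0, [tree[0][0]]
        for t, m in enumerate(moves, start=1):
            node += m
            rates.append(tree[t][node])
        pv, df = 0.0, 1.0
        for t in range(T):
            df /= 1 + rates[t]                   # discount factor to time t+1
            cf = coupon + (par if t == T - 1 else 0)
            pv += cf * df
        total += pv
    return total / len(paths)
```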
Moreover, term structure models, including general equilibrium and arbitrage-free models, aim to explain the shape of the yield curve and are used for valuing bonds and bond-related derivatives.
It's worth noting that arbitrage-free models are commonly used to value bonds with embedded options.
Unlike equilibrium models, they start with observed market prices of a reference set of financial instruments, assuming that this reference set is correctly priced.
Embedded options are rights attached to a bond that can be exercised by the issuer, bondholder, or automatically based on interest rate movements. These options can be simple, such as call, put, and extension options, or complex, involving combinations of options.
Valuing bonds with embedded options requires considering the arbitrage-free values of the straight bond and each embedded option. The value of a callable bond decreases due to the issuer's call option, while a putable bond's value increases due to the bondholder's put option. Interest rate volatility affects embedded options, and a binomial interest rate tree is used to model volatility. Valuing bonds with embedded options involves generating an interest rate tree, determining option exercise at each node, and using backward induction for valuation.
Option-adjusted spread (OAS) and effective duration are important measures for assessing price sensitivity. Convertible bonds have unique features, such as conversion price and ratio, and their valuation requires consideration of bond, stock, and option characteristics. The risk-return profile of convertible bonds depends on the underlying share price relative to the conversion price. Valuation of convertible bonds follows the arbitrage-free framework, and each component can be valued separately.
In the context of credit analysis, several important topics are discussed in this reading. These key points can be summarized as follows:
One aspect of modeling credit risk involves considering three crucial factors: expected exposure to default, recovery rate, and loss given default.
To assess the credit risk of a bond accurately, the credit valuation adjustment (CVA) is calculated. It involves summing the present values of expected losses over the remaining life of the bond. Risk-neutral probabilities are used, and discounting is done at risk-free rates.
The CVA serves as compensation for bearing default risk and can be expressed as a credit spread.
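As a stylized example, all inputs hypothetical (a 3-year zero-coupon bond, a flat 3% risk-free curve, a 2% annual risk-neutral conditional default probability, 40% recovery), the CVA is the sum of discounted expected losses:

```python
# Illustrative CVA calculation: sum the PV of expected losses over the bond's life,
# using risk-neutral default probabilities and discounting at risk-free rates.
face = 100.0
rf = 0.03            # flat risk-free rate (assumption)
hazard = 0.02        # annual risk-neutral conditional default probability (assumption)
recovery = 0.40

cva, survival = 0.0, 1.0
for t in range(1, 4):
    pod = survival * hazard                  # unconditional default probability in year t
    survival -= pod
    exposure = face / (1 + rf) ** (3 - t)    # expected exposure at the default date
    lgd = exposure * (1 - recovery)          # loss given default
    cva += pod * lgd / (1 + rf) ** t         # PV of the expected loss
```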
Credit scores and ratings are evaluations provided by third parties that assess the creditworthiness of individuals or entities. These assessments are used in different markets.
Credit analysts utilize credit ratings and transition probabilities to adjust a bond's yield, reflecting the probabilities of credit migration. It is important to note that credit spread migration typically leads to a reduction in expected return.
Credit analysis models fall into two broad categories: structural models and reduced-form models. Structural models consider stakeholders' positions from an option perspective, while reduced-form models focus on predicting the timing of defaults using observable variables.
When volatile interest rates are assumed, an arbitrage-free valuation framework can be employed to estimate the credit risk associated with a bond.
For floating-rate notes, the discount margin is similar to the credit spread observed in fixed-coupon bonds. This discount margin can be calculated using an arbitrage-free valuation framework.
Arbitrage-free valuation methods are useful for assessing the sensitivity of the credit spread to changes in credit risk parameters.
The term structure of credit spreads is influenced by macro and micro factors. Weak economic conditions tend to result in a steeper and wider credit spread curve. The shape of the credit spread curve is also influenced by market dynamics, such as supply and demand, and frequently traded securities.
Issuer- or industry-specific factors, such as events that may decrease leverage in the future, can impact the shape of the credit spread curve, causing it to flatten or invert.
Bonds that are at a high risk of default often trade close to their recovery value at different maturities. Consequently, the credit spread curve provides less information about the relationship between credit risk and maturity in such cases.
When analyzing securitized debt, credit analysts consider various factors. These include asset concentration, the similarity or heterogeneity of assets in terms of credit risk, and other relevant characteristics to make informed investment decisions.
This module focuses on credit default swaps (CDS).
A credit default swap (CDS) is a contractual arrangement between two parties, whereby one party seeks protection against potential losses resulting from the default of a borrower within a specified timeframe. Key aspects of CDS include the following:
A CDS is based on the debt of a third party, known as the reference entity. Specifically, it involves a senior unsecured bond referred to as the reference obligation.
Moreover, a CDS typically provides coverage for all obligations of the reference entity that have equal or higher seniority.
The two parties involved in a CDS are the credit protection buyer and the credit protection seller. Specifically, the buyer is said to be short the credit of the reference entity, while the seller is said to be long the credit of the reference entity.
Furthermore, a credit event triggers the payment in a CDS. These credit events can include bankruptcy, failure to pay, and, in some cases, involuntary restructuring.
Settlement of a CDS can occur through either a cash payment or physical delivery. In the case of cash settlement, it is determined by either the cheapest-to-deliver obligation or an auction of the reference entity's debt.
In terms of valuation, a CDS involves estimating the present value of the payment leg (payments made from the protection buyer to the protection seller) and the protection leg (payment from the protection seller to the protection buyer in the event of default).
The comparison of these values determines whether an upfront premium is paid by the buyer or the seller.
The hazard rate, representing the probability of default given no prior default, plays a significant role in determining the value of expected payments in a CDS.
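A minimal sketch of the two legs, assuming a constant hazard rate, a flat risk-free curve, and illustrative numbers for the coupon and recovery rate (per unit of notional):

```python
# CDS leg valuation sketch: with a constant hazard rate, each premium payment is
# received only if the reference entity has survived, and the protection payment
# is weighted by the probability of default in each period. All inputs hypothetical.
hazard = 0.03        # annual conditional default probability
spread = 0.01        # CDS coupon per unit notional
rf = 0.02            # flat risk-free discount rate
recovery = 0.40

premium_leg, protection_leg = 0.0, 0.0
for t in range(1, 6):                        # five annual periods
    survival = (1 - hazard) ** t
    pod = (1 - hazard) ** (t - 1) * hazard   # unconditional default probability in t
    premium_leg += spread * survival / (1 + rf) ** t
    protection_leg += pod * (1 - recovery) / (1 + rf) ** t

upfront = protection_leg - premium_leg       # paid by the buyer if positive
```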
Moreover, CDS prices are often quoted in terms of credit spreads, which indicate the compensation received by the protection seller from the protection buyer.
These credit spreads are commonly expressed through a credit curve, which shows the relationship between credit spreads for bonds of different maturities from the same borrower.
The value of a CDS changes over time as the credit quality of the reference entity shifts, resulting in gains or losses for the parties involved, even if default has not occurred.
As a CDS approaches maturity, its value tends toward zero.
Accumulated gains or losses in a CDS can be monetized by entering into an offsetting position that matches the terms of the original contract.
Furthermore, CDS are utilized to adjust credit exposures and capitalize on differing assessments of credit costs associated with various instruments tied to the reference entity, such as debt, equity, and derivatives.
This reading provides a foundation for understanding the pricing and valuation of forwards, futures, and swaps. It explores key points by discussing the rules followed by arbitrageurs, the no-arbitrage approach used for pricing and valuation, and the assumptions made in this context.
Carry arbitrage models are employed, and a distinction is made between pricing and valuation. The forward contract price is set so that the contract has zero value at initiation, and similar principles apply to futures contracts.
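For instance, under the carry arbitrage model with no income or costs on the underlying, the forward price that gives the contract zero initial value, and the value of a long forward later in its life, can be computed directly (all numbers illustrative):

```python
# Carry-arbitrage sketch, no income or storage costs (hypothetical inputs).
S0, r, T = 100.0, 0.05, 1.0
F0 = S0 * (1 + r) ** T              # forward price set at initiation: 105.0

St, t = 108.0, 0.5                  # hypothetical spot price halfway through
Vt = St - F0 / (1 + r) ** (T - t)   # value of the long forward position
```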
The value of a forward commitment depends on various factors. Forward rate agreements (FRAs) are introduced as forward contracts on interest rates.
Equities and fixed-income securities have specific pricing considerations. Swaps are priced and valued using replicating or offsetting portfolios, and the value of an interest rate swap is calculated based on fixed swap rates.
Currency swaps and equity swaps follow similar approaches. By understanding these concepts, readers can make informed decisions in financial markets regarding the pricing and valuation of forward commitments.
This module serves as a fundamental guide to understanding the valuation of contingent claims, specifically focusing on the valuation of different options.
It covers key points including the essential rules arbitrageurs follow: using none of their own money and taking no price risk. The valuation process follows the no-arbitrage approach, which is based on the law of one price.
The reading assumes certain conditions, such as the availability of identifiable and investable replicating instruments, the absence of market frictions, permission for short selling and borrowing/lending at a known risk-free rate, and a known distribution for the underlying instrument's price.
The reading introduces the two-period binomial model and its relationship to three one-period binomial models positioned at different times. It discusses valuation approaches for European-style and American-style options, highlighting the use of the expectations approach for European-style options and the no-arbitrage approach for both types.
American-style options are influenced by early exercise when working backward through the binomial tree. Interest rate options are valued using a modified Black futures option model, considering factors like the underlying being a forward rate agreement (FRA), accrual period adjustments, and day-count conventions.
The Black-Scholes-Merton (BSM) option valuation model assumes the underlying instrument follows geometric Brownian motion, resulting in a lognormal distribution of prices. The BSM model is interpreted as a dynamically managed portfolio consisting of the underlying instrument and zero-coupon bonds.
It explains the significance of N(d1) and N(d2) in the BSM model, relating them to replication of options, delta determination, risk-neutral probability estimation, and the impact on option values.
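A compact sketch of the BSM call formula for a non-dividend-paying stock, using only the Python standard library; the comments note the replication interpretation of N(d1) and N(d2) described above:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bsm_call(S, K, r, sigma, T):
    """Black-Scholes-Merton European call value (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    # N(d1): number of shares in the replicating portfolio (the option's delta);
    # N(d2): risk-neutral probability the option expires in the money.
    return S * N(d1) - K * exp(-r * T) * N(d2)
```

For example, with S = K = 100, r = 5%, sigma = 20%, and T = 1 year, the call is worth about 10.45.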
The Black futures option model is applied when the underlying is a futures or forward contract. Valuation of interest rate options is achieved using an adjusted version of this model, incorporating FRAs and considering day-count conventions and underlying notional amounts.
The reading introduces interest rate caps and floors, which are portfolios of interest rate call and put options with sequential maturities and the same exercise rate. Swaptions, options on swaps, are also discussed, distinguishing between payer and receiver swaptions.
Risk measures, including delta, gamma, theta, vega, and rho, are defined and their roles in option valuation are explained. Delta hedging and the concept of a delta-neutral portfolio are discussed. The estimation of option price changes using delta approximation and the error estimation using gamma are explored. The reading emphasizes the importance of gamma in capturing non-linearity risk and the use of a delta-plus-gamma approximation for better price estimation.
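With hypothetical Greeks, the delta-plus-gamma approximation adds a curvature term to the linear delta estimate of the option price change:

```python
# Delta vs. delta-plus-gamma approximation of an option price change.
# The Greeks below are made-up illustrative values, not model outputs.
delta, gamma = 0.60, 0.04
dS = 2.0                                         # change in the underlying price
linear = delta * dS                              # delta-only estimate
curved = delta * dS + 0.5 * gamma * dS ** 2      # delta-plus-gamma estimate
```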
Furthermore, the reading touches upon theta as a measure of option value change over time, vega as the sensitivity to volatility changes, and rho as the sensitivity to changes in the risk-free interest rate.
It acknowledges that historical volatility can be estimated but lacks predictability for future volatility. Implied volatility is defined as the BSM model volatility that corresponds to the market option price, reflecting market participants' beliefs about future underlying volatility.
The reading introduces the concepts of the volatility smile and volatility surface. The volatility smile represents the implied volatility plotted against the exercise price, while the volatility surface expands it to a three-dimensional plot, incorporating expiration time as an additional dimension.
It highlights that the observed volatility surface deviates from the expectation of a flat surface predicted by the BSM model assumptions.
Multiple linear regression is a statistical technique utilized to model the linear relationship between a dependent variable and two or more independent variables. It finds practical application in explaining financial variables, testing theories, and making forecasts.
The process of conducting multiple regression involves several decision points, including identifying the dependent and independent variables, selecting an appropriate regression model, testing underlying assumptions, evaluating goodness of fit, and making necessary adjustments.
In a multiple regression model, the equation represents the relationship as Yi = b0 + b1X1i + b2X2i + b3X3i + ... + bkXki + εi, where Y is the dependent variable and X1 to Xk are the independent variables. The model is estimated using n observations, where i ranges from 1 to n.
The intercept coefficient, denoted as b0, represents the expected value of Y when all independent variables are set to zero. On the other hand, the slope coefficients (b1 to bk) are also known as partial regression coefficients, describing the impact of each independent variable (Xj) on Y while holding other independent variables constant.
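A minimal illustration with simulated data: fitting Y = b0 + b1 X1 + b2 X2 + ε by least squares recovers the intercept and partial slope coefficients (all numbers are made up for the sketch):

```python
import numpy as np

# Simulate data from a known model and recover the coefficients by OLS.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n),                 # intercept column
                     rng.normal(size=n),         # X1
                     rng.normal(size=n)])        # X2
beta_true = np.array([1.0, 2.0, -0.5])           # b0, b1, b2
y = X @ beta_true + rng.normal(scale=0.1, size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```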
To ensure the validity of multiple regression models, several assumptions must be met.
These assumptions include: (1) linearity of the relationship between the dependent and independent variables, (2) homoskedasticity (equal variance of errors) across all observations, (3) independence of errors, (4) normality of error distribution, and (5) independence of the independent variables.
Diagnostic plots can aid in assessing whether these assumptions hold true.
Scatterplots of the dependent variable against the independent variables help identify non-linear relationships, while residual plots assist in detecting violations of homoskedasticity and independence of errors.
In multiple regression analysis, the adjusted R2 is utilized as a measure of how well the model fits the data, as it accounts for the number of independent variables included in the model.
Unlike the regular R2, the adjusted R2 does not automatically increase as more independent variables are added.
The adjusted R2 will increase or decrease when a variable is added to the model, depending on whether its coefficient's absolute value of the t-statistic is greater or less than 1.0.
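The adjusted R2 follows directly from R2, the number of observations n, and the number of independent variables k: adjusted R2 = 1 − (1 − R2)(n − 1)/(n − k − 1). A one-line helper with illustrative inputs:

```python
def adjusted_r2(r2, n, k):
    """Adjusted R^2 for n observations and k independent variables."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Example: R^2 = 0.50 with n = 62 observations and k = 5 regressors.
adj = adjusted_r2(0.50, 62, 5)
```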
To assess and select the "best" model among a group with the same dependent variable, other criteria like Akaike's information criterion (AIC) and Schwarz's Bayesian information criterion (BIC) are commonly used. AIC is preferred when the goal is prediction, while BIC is favored when evaluating goodness of fit. In both cases, lower values indicate better model performance.
When it comes to hypothesis testing of individual coefficients in multiple regression, the procedures using t-tests remain the same as in simple regression.
However, for jointly testing a subset of variables, the joint F-test is employed. This test compares a "restricted" model, which includes a narrower set of independent variables nested within the broader "unrestricted" model.
The null hypothesis states that the slope coefficients of all independent variables excluded from the restricted model are zero. The general linear F-test expands on this concept, examining the null hypothesis that all slope coefficients in the unrestricted model are equal to zero.
Predicting the value of the dependent variable using a multiple regression model follows a similar process to simple regression. The estimated slope coefficients, multiplied by the assumed values of the independent variables, are summed, and the estimated intercept coefficient is added.
In multiple regression, the confidence interval around the forecasted value of the dependent variable accounts for both model error and sampling error, which arises from forecasting the independent variables. The larger the sampling error, the wider the standard error of the forecast of Y, resulting in a wider confidence interval.
Principles for proper regression model specification encompass several aspects, including the economic reasoning behind variable choices, the concept of parsimony, achieving good out-of-sample performance, selecting an appropriate model functional form, and ensuring no violations of regression assumptions.
Instances of failures in regression functional form often arise from different factors, such as omitted variables, inappropriate forms of variables, improper variable scaling, and unsuitable data pooling. These failures can lead to violations of regression assumptions.
Heteroskedasticity occurs when the variance of regression errors varies across observations. Unconditional heteroskedasticity refers to situations where the error variance is not correlated with the independent variables, while conditional heteroskedasticity emerges when the error variance correlates with the values of the independent variables.
Unconditional heteroskedasticity does not pose significant issues for statistical inference. However, conditional heteroskedasticity is problematic as it leads to an underestimation of the standard errors of regression coefficients, resulting in inflated t-statistics and a higher likelihood of Type I errors.
Detecting conditional heteroskedasticity can be accomplished using the Breusch-Pagan (BP) test, and the bias it introduces into the regression model can be rectified by computing robust standard errors.
Serial correlation, also known as autocorrelation, occurs when regression errors are correlated across observations. It can be a significant problem in time-series regressions, leading to inconsistent coefficient estimates and underestimation of standard errors, which in turn inflates t-statistics (similar to conditional heteroskedasticity).
The Breusch-Godfrey (BG) test provides a robust method for detecting serial correlation. This test utilizes residuals from the original regression as the dependent variable, running them against initial regressors plus lagged residuals. The null hypothesis (H0) in this case is that the coefficients of the lagged residuals are zero.
Biased estimates of standard errors caused by serial correlation can be rectified using robust standard errors, which also account for conditional heteroskedasticity.
Multicollinearity occurs when there are high pairwise correlations between independent variables or when three or more independent variables form approximate linear combinations that are highly correlated. Multicollinearity leads to inflated standard errors and reduced t-statistics.
The variance inflation factor (VIF) serves as a measure for quantifying multicollinearity. A VIF value of 1 for a specific independent variable (Xj) indicates no correlation with the other regressors. If VIFj exceeds 5, further investigation is warranted, and a VIFj value exceeding 10 indicates serious multicollinearity requiring correction.
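A VIF sketch: regress each candidate regressor on the remaining ones (with an intercept) and compute 1/(1 − R2). The function below is a minimal illustration, not a production diagnostic:

```python
import numpy as np

def vif(X, j):
    """Variance inflation factor of column j of regressor matrix X
    (X holds the independent variables only, no intercept column)."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    Z = np.column_stack([np.ones(len(X)), others])   # add an intercept
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ coef
    r2 = 1 - resid.var() / y.var()                   # R^2 of X_j on the others
    return 1 / (1 - r2)
```

Independent regressors give VIF values near 1; nearly collinear ones give very large values.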
Potential solutions to address multicollinearity include dropping one or more of the regression variables, using alternative proxies for certain variables, or increasing the sample size.
Regression results can be influenced by two types of observations: high-leverage points, characterized by extreme values of independent variables, and outliers, characterized by extreme values of the dependent variable.
To identify high-leverage points, leverage is used as a measure. An observation is potentially influential if its leverage exceeds three times the average leverage, that is, 3(k + 1)/n, where k is the number of independent variables and n is the number of observations. Outliers, in turn, can be identified using studentized residuals: if the studentized residual exceeds the critical value of the t-statistic with n - k - 2 degrees of freedom, the observation is potentially influential.
Cook's distance, also known as Cook's D (Di), is a metric that quantifies the impact of individual data points on the regression results. It measures how much the estimated regression values change if a specific observation (i) is removed. If Di is greater than 2 times the square root of k/n, where k is the number of independent variables and n is the number of observations, then the observation is highly likely to be influential. An influence plot can be used to visually analyze the leverage, studentized residuals, and Cook's D for each observation.
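A leverage sketch on simulated data: the leverage values are the diagonal of the hat matrix, their average equals (k + 1)/n, and a planted extreme x-value gets flagged by the 3(k + 1)/n rule of thumb. The data are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 50, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
X[0, 1] = 10.0                            # plant one extreme x-value

H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat matrix
leverage = np.diag(H)                     # average leverage is (k + 1)/n
flagged = leverage > 3 * (k + 1) / n      # high-leverage rule of thumb
```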
Dummy variables, also called indicator variables, are utilized to represent qualitative independent variables. They take a value of 1 to indicate the presence of a specific condition and 0 otherwise. When including n possible categories, the regression model must incorporate n - 1 dummy variables.
Intercept dummy variables modify the original intercept based on specific conditions.
When the intercept dummy is 1, the regression line shifts up or down parallel to the base regression line. Similarly, slope dummy variables allow for a changing slope under specific conditions. When the slope dummy is 1, the slope changes according to (dj + bj) × Xj, where dj represents the coefficient of the dummy variable and bj is the slope of Xj in the original regression line.
Logistic regression models are employed when the dependent variable is qualitative or categorical. They are commonly used in binary classification problems encountered in machine learning and neural networks. In logistic regression, the event probability (P) undergoes a logistic transformation into the log odds, ln[P/(1 − P)]. This transformation linearizes the relationship between the transformed dependent variable and the independent variables.
Logistic regression coefficients are typically estimated using the maximum likelihood estimation (MLE) method. The slope coefficients are interpreted as the change in the log odds of the event occurring per unit change in the independent variable, while holding all other independent variables constant.
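A small numeric illustration of the log-odds interpretation (the coefficients below are hypothetical): a one-unit increase in X raises the log odds by b1, which multiplies the odds of the event by exp(b1).

```python
from math import exp

b0, b1 = -1.0, 0.8          # hypothetical logistic regression coefficients

def log_odds(x):
    return b0 + b1 * x      # ln[P / (1 - P)]

def prob(x):
    return 1 / (1 + exp(-log_odds(x)))   # invert the logistic transform

# The odds ratio for a one-unit increase in X equals exp(b1).
odds_ratio = exp(log_odds(1.0)) / exp(log_odds(0.0))
```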
The reading introduces various aspects of time series modeling and forecasting. In a linear trend model, the predicted trend value of a time series in period t is b̂0 + b̂1t, while in a log-linear trend model it is e^(b̂0 + b̂1t). Different types of trend models are suitable for time series with constant growth in amount or a constant growth rate.
Trend models may not fully capture the behavior of a time series, as indicated by serial correlation of the error term. If the Durbin-Watson statistic significantly differs from 2, suggesting serial correlation, an alternative model should be considered.
Autoregressive models (AR) use lagged values to predict the current value of a time series. A time series is considered covariance stationary if its expected value, variance, and covariance remain constant and finite over time. Nonstationary time series may exhibit trends or non-constant variance. Linear regression is valid for estimating autoregressive models only if the time series is covariance stationary.
For a specific autoregressive model to fit well, the autocorrelations of the error term should be zero at all lags. Mean-reverting time series tend to fall when above their long-run mean and rise when below it. Covariance stationary time series exhibit mean reversion.
Forecasts in autoregressive models can be made for future periods based on past values. Out-of-sample forecasts are more valuable for evaluating forecasting performance than in-sample forecasts. The root mean squared error (RMSE) is a criterion for comparing forecast accuracy.
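As a sketch of AR(1) forecasting with made-up coefficients: chaining the one-period-ahead forecast x_t = b0 + b1 x_{t-1} forward, the forecasts converge to the mean-reverting level b0 / (1 − b1).

```python
# AR(1) forecast chain with hypothetical coefficients; the error term's
# expected value is zero, so it drops out of each forecast step.
b0, b1 = 2.0, 0.6
mean_reverting_level = b0 / (1 - b1)   # the long-run mean: 5.0

x = 10.0                               # current value, above the long-run mean
forecasts = []
for _ in range(20):
    x = b0 + b1 * x                    # expected value next period
    forecasts.append(x)
```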
The coefficients in time series models can be unstable across different sample periods, so selecting a stationary sample period is important. A random walk is a nonstationary time series where the value in one period is the previous value plus a random error. A random walk with drift includes a nonzero intercept. Random walks have unit roots and are not covariance stationary.
Transforming a time series through differencing can sometimes make it covariance stationary. Moving averages use past values of a time series to calculate its current value, while moving-average models (MA) use lagged error terms for prediction. The order of an MA model can be determined by examining the autocorrelations.
Autoregressive and moving-average time series exhibit different patterns in autocorrelations. Seasonality can be modeled by including seasonal lags in the model. ARMA models have limitations, including unstable parameters and difficulty in determining the AR and MA order.
Autoregressive conditional heteroskedasticity (ARCH) refers to the variance of the error term depending on previous errors. A test can be conducted to identify ARCH(1) errors.
Linear regression should be used cautiously when time series have unit roots or are not cointegrated. A Dickey-Fuller test applied to the residuals of a regression of one series on the other (the Engle-Granger approach) can determine whether two time series are cointegrated.
Machine learning methods are increasingly being used in various stages of the investment management value chain. These methods aim to extract knowledge from large datasets by identifying underlying patterns and making predictions without human intervention.
Supervised learning relies on labeled training data, with observed inputs (X's or features) and associated outputs (Y or target). It can be categorized into regression, which predicts continuous target variables, and classification, which deals with categorical or ordinal target variables.
Unsupervised learning algorithms, on the other hand, work with unlabeled data and infer relationships between features, summarize them, or reveal underlying structures in the data.
Common applications of unsupervised learning include dimension reduction and clustering.
Deep learning utilizes sophisticated algorithms and neural networks to address complex tasks such as image classification and natural language processing. Reinforcement learning involves a computer learning through interaction with itself or data generated by the same algorithm.
Generalization refers to an ML model's ability to maintain its predictive power when applied to new, unseen data. Overfitting is a common issue where models are overly tailored to the training data and fail to generalize. Bias error measures how well a model fits the training data, variance error quantifies the variation of model results with new data, and base error arises from randomness in the data. Out-of-sample error combines bias, variance, and base errors.
To address the holdout sample problem, k-fold cross-validation shuffles and divides the data into k subsets, using k-1 subsets for training and one subset for validation. Regularization methods reduce statistical variability in high-dimensional data estimation or prediction by reducing model complexity.
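A minimal k-fold cross-validation sketch: shuffle the indices, split them into k folds, fit on k − 1 folds, and score on the held-out fold. The "model" here is just the training-set mean, chosen to keep the sketch self-contained:

```python
import numpy as np

def kfold_mse(y, k=5, seed=0):
    """Average validation MSE over k folds for a mean-prediction 'model'."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))                 # shuffle before splitting
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        val = folds[i]                            # held-out validation fold
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        pred = y[train].mean()                    # "train" the trivial model
        errors.append(np.mean((y[val] - pred) ** 2))
    return float(np.mean(errors))
```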
LASSO is a popular penalized regression technique that assigns a penalty based on the absolute values of regression coefficients, promoting feature selection.
Support vector machines (SVM) aim to find the optimal hyperplane for classification tasks. K-nearest neighbor (KNN) is a supervised learning method used for classification by comparing similarities between new observations and existing data points. Classification and regression trees (CART) are utilized for predicting categorical or continuous target variables.
Ensemble learning combines predictions from multiple models to improve accuracy and stability. Random forest classifiers consist of decision trees generated through bagging or random feature reduction. Principal components analysis (PCA) reduces correlated features into uncorrelated composite variables. K-means clustering partitions observations into non-overlapping clusters, and hierarchical clustering builds a hierarchy of clusters using bottom-up (agglomerative) or top-down (divisive) approaches.
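Of the unsupervised techniques above, k-means is the most compact to illustrate. The sketch below (our own simplified implementation, with a deliberately naive deterministic initialization) alternates between assigning points to the nearest centroid and moving each centroid to the mean of its cluster:

```python
import math

def k_means(points, k, iters=20):
    """Minimal k-means: assign each point to the nearest centroid,
    then move each centroid to the mean of its assigned cluster."""
    centroids = points[:k]  # naive initialization: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(coord) / len(cluster) for coord in zip(*cluster))
            if cluster else centroids[i]
            for i, cluster in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated groups of 2-D points yield two distinct centroids.
pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 5.2), (4.9, 5.1)]
centroids, clusters = k_means(pts, k=2)
```

Production implementations use smarter initialization (e.g., k-means++) and a convergence check rather than a fixed iteration count.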
Neural networks consist of interconnected nodes and are used for tasks involving non-linearities and complex interactions. Deep neural networks (DNNs) have multiple hidden layers and are at the forefront of artificial intelligence. Reinforcement learning involves an agent maximizing rewards over time while considering environmental constraints.
This module explores the key steps involved in big data projects that incorporate the development of machine learning (ML) models, particularly those that combine textual big data with structured inputs.
Big data, known for its volume, velocity, variety, and potential veracity issues, offers significant potential for fintech applications, including those related to investment management.
The process of building ML models traditionally involves several major steps, including problem conceptualization, data collection, data preparation and wrangling, data exploration, and model training.
When it comes to building textual ML models, the initial steps differ somewhat from traditional models and involve text problem formulation, text curation, text preparation and wrangling, and text exploration.
For structured data, data preparation and wrangling tasks include data cleansing, addressing issues such as incompleteness, invalidity, inaccuracy, inconsistency, non-uniformity, and duplication errors.
Preprocessing for structured data typically involves transformations such as extraction, aggregation, filtration, selection, and conversion.
Preparing and wrangling unstructured text data involves specific text-related cleansing and preprocessing tasks, such as removing HTML tags, punctuation, numbers, and white spaces.
Text preprocessing requires normalization techniques such as lowercasing, removing stop words, stemming, lemmatization, creating bag-of-words (BOW) and n-grams, and organizing them into a document term matrix (DTM).
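The bag-of-words-to-DTM step can be sketched as follows (a toy example with our own function names; real pipelines would also apply stop-word removal and stemming before counting):

```python
import re

def build_dtm(documents):
    """Tokenize and lowercase each document, then count token
    occurrences to form a document term matrix (DTM): one row
    per document, one column per vocabulary term."""
    tokenized = [re.findall(r"[a-z]+", doc.lower()) for doc in documents]
    vocabulary = sorted(set(tok for doc in tokenized for tok in doc))
    dtm = [[doc.count(term) for term in vocabulary] for doc in tokenized]
    return vocabulary, dtm

docs = ["Profits rose sharply.", "Profits fell; margins fell too."]
vocab, dtm = build_dtm(docs)
```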
Data exploration encompasses techniques like exploratory data analysis, feature selection, and feature engineering, where visualization tools like histograms, box plots, scatterplots, and word clouds facilitate insights and collaboration among team members.
Feature selection methods for text data include term frequency, document frequency, chi-square test, and mutual information measures, while feature engineering involves converting numbers into tokens, creating n-grams, and utilizing named entity recognition and parts of speech to engineer new feature variables.
The steps for model training, including method selection, performance evaluation, and model tuning, often remain consistent across structured and unstructured data projects.
Model selection depends on factors such as labeled or unlabeled data, data types (numerical, text, image, speech), and dataset size.
Model performance evaluation involves error analysis techniques like confusion matrices, receiver operating characteristics (ROC) analysis, and root mean square error (RMSE) calculations.
Confusion matrices help determine true positives, true negatives, false positives, and false negatives, while metrics like accuracy, F1 score, precision, and recall assess model performance.
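These metrics follow directly from the four confusion-matrix cells, as the short sketch below shows (illustrative counts; F1 is the harmonic mean of precision and recall):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard performance metrics derived from a confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # also called sensitivity
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Hypothetical counts: 40 true positives, 10 false positives,
# 45 true negatives, 5 false negatives.
acc, prec, rec, f1 = classification_metrics(tp=40, fp=10, tn=45, fn=5)
```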
ROC analysis compares ROC curves and area under the curve (AUC) to evaluate model performance, with more convex curves and higher AUC indicating better performance.
Model tuning involves managing the trade-off between bias error (underfitting) and variance error (overfitting), often visualized through fitting curves of in-sample and out-of-sample errors against model complexity.
In real-world big data projects involving sentiment analysis of financial text for specific stocks, the text data is transformed into structured data to populate the DTM, which serves as input for the ML algorithm.
To derive term frequency (TF) and TF-IDF at the sentence level, various frequency measures such as TotalWordsInSentence, TotalWordCount, TermFrequency (Collection Level), WordCountInSentence, SentenceCountWithWord, Document Frequency, and Inverse Document Frequency should be used to create a term frequency measures table.
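For intuition, the TF-IDF calculation can be sketched at the document level (the sentence-level version in the curriculum uses the same logic with sentences in place of documents; the function name and corpus are our own):

```python
import math

def tf_idf(term, document, corpus):
    """Term frequency times inverse document frequency.
    TF: share of the document's tokens equal to the term.
    IDF: log of (number of documents / documents containing the term)."""
    tf = document.count(term) / len(document)
    df = sum(1 for doc in corpus if term in doc)
    idf = math.log(len(corpus) / df)
    return tf * idf

corpus = [["profits", "rose"], ["profits", "fell"], ["margins", "fell"]]
score_profits = tf_idf("profits", corpus[0], corpus)  # in 2 of 3 docs
score_rose = tf_idf("rose", corpus[0], corpus)        # in 1 of 3 docs
```

The rarer term ("rose") receives the higher score, which is exactly why TF-IDF down-weights ubiquitous words.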
This reading emphasizes the interconnectedness between the state of the economy and financial market activity. Financial markets serve as platforms where savers and investors connect, enabling savers to postpone current consumption for future consumption, governments to raise capital for societal needs, and corporations to access funds for profitable investments.
These activities contribute to economic growth and employment opportunities. Financial instruments, such as bonds and equities, represent claims on the underlying economy, highlighting the significant connection between economic decisions and the prices of these instruments.
The purpose of this reading is to identify and explain the relationship between the real economy and financial markets, and how economic analysis can be used to value individual financial market securities as well as collections of securities, such as market indexes.
The reading begins by introducing the fundamental pricing equation for all financial instruments. It then delves into the relationship between the economy and real default-free debt.
The analysis further extends to examine how the economy influences the prices of nominal default-free debt, credit risky debt (such as corporate bonds), publicly traded equities, and commercial real estate.
This module explores important considerations for ETF investors, including understanding how ETFs function and trade, their tax-efficient attributes, and their key portfolio applications.
To summarize:
ETFs rely on a creation/redemption mechanism facilitated by authorized participants (APs), who alone can create new ETF shares or redeem existing ones. ETFs trade in both primary and secondary markets, with end investors buying and selling in the secondary market much as they would stocks.
When evaluating ETF performance, it is more useful to examine holding period performance deviations (tracking differences) rather than solely focusing on the standard deviation of daily return differences (tracking error).
Tracking differences arise due to various factors, such as fees and expenses, representative sampling, index changes, regulatory and tax requirements, and fund accounting practices.
From a tax perspective, ETFs are generally treated in a manner similar to the securities they hold. They offer advantages over traditional mutual funds, as portfolio trading is typically not required when investors enter or exit an ETF. Additionally, the creation/redemption process enables ETFs to be more tax-efficient, as issuers can strategically redeem low-cost-basis securities to minimize future taxable gains. It is crucial to consider the unique ETF taxation issues prevalent in local markets.
ETF bid-ask spreads exhibit variations based on trade size and factors such as creation/redemption costs, bid-ask spreads of underlying securities, hedging or carry positions, and market makers' profit spreads. In the case of fixed-income ETFs, bid-ask spreads tend to be wider owing to the nature of dealer markets and the complexity of hedging.
Conversely, ETFs holding international stocks experience tighter bid-ask spreads when the underlying security markets are open for trading.
Premiums and discounts can occur within ETFs, representing the disparity between the exchange price of the ETF and the fund's calculated NAV based on underlying security prices. These differences can be attributed to time lags, liquidity, and supply-demand dynamics.
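The premium/discount calculation itself is simple, as this illustrative sketch shows (hypothetical prices):

```python
def premium_discount_pct(exchange_price, nav):
    """Premium (positive) or discount (negative) of an ETF's exchange
    price relative to its net asset value (NAV), in percent."""
    return (exchange_price - nav) / nav * 100

# An ETF trading at 50.25 against a NAV of 50.00 shows a 0.5% premium;
# trading at 49.80 it would show a 0.4% discount.
premium = premium_discount_pct(50.25, 50.00)
discount = premium_discount_pct(49.80, 50.00)
```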
The costs associated with ETF ownership encompass various elements, including fund management fees, tracking error, portfolio turnover, trading costs (such as commissions and bid-ask spreads), taxable gains/losses, and security lending. The impact of these costs varies depending on the holding period, with one-time trading costs assuming greater significance for shorter-term tactical ETF traders, while management fees and turnover become more pronounced for longer-term buy-and-hold investors.
It is important to differentiate ETFs from exchange-traded notes (ETNs), as ETNs entail unique counterparty risks, and swap-based ETFs may also involve counterparty risk.
Additionally, ETF closures can give rise to unexpected tax liabilities.
In the realm of portfolio management, ETFs serve diverse purposes, including providing core asset class exposure, facilitating tactical strategies, implementing factor-based strategies, and contributing to portfolio efficiency applications like rebalancing, liquidity management, and transitions. ETFs are widely embraced by various types of investors seeking exposure to asset classes, equity style benchmarks, fixed-income categories, and commodities.
Thematic ETFs find utility in more targeted active portfolio management, while systematic strategies rely on rules-based benchmarks to access factors such as size, value, momentum, or quality. ETFs also find frequent application in multi-asset and global asset allocation strategies.
To effectively harness the potential of ETFs, investors should conduct comprehensive research and diligently evaluate factors such as the ETF's index construction methodology, costs, risks, and performance history.
This module covers multifactor models, which are essential tools for quantitative portfolio management.
These models play a crucial role in constructing portfolios, analyzing risk and return, and attributing sources of performance.
Unlike single-factor approaches, multifactor models provide a more detailed and nuanced view of risk. They describe asset returns by considering their exposure to a set of factors, including systematic factors that explain the average returns of various risky assets.
These factors represent priced risks that investors demand additional compensation for. The arbitrage pricing theory (APT) offers a framework where asset returns are linearly related to their risk with respect to these factors, making fewer assumptions compared to the Capital Asset Pricing Model (CAPM).
Multifactor models can be categorized into macroeconomic factor models, fundamental factor models, and statistical factor models, depending on the nature of the factors used.
Macroeconomic factor models focus on surprises in macroeconomic variables that significantly influence asset class returns, while fundamental factor models consider attributes of stocks or companies that explain cross-sectional differences in stock prices. Statistical factor models utilize statistical methods to identify portfolios that best explain historical returns based on either covariances or variances.
These multifactor models find applications in return attribution, risk attribution, portfolio construction, and strategic investment decisions. Factor portfolios, which have unit sensitivity to a particular factor, are useful in these models.
Active return, active risk (also known as tracking error), and information ratio (mean active return divided by active risk) are important performance metrics in evaluating multifactor models.
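These three metrics can be computed directly from portfolio and benchmark return series, as in this sketch (hypothetical returns; the sample standard deviation is used for tracking error):

```python
import statistics

def information_ratio(portfolio_returns, benchmark_returns):
    """Mean active return divided by active risk (tracking error),
    where active return is portfolio return minus benchmark return."""
    active = [p - b for p, b in zip(portfolio_returns, benchmark_returns)]
    mean_active = statistics.mean(active)
    tracking_error = statistics.stdev(active)  # sample standard deviation
    return mean_active, tracking_error, mean_active / tracking_error

port = [0.04, 0.02, 0.05, 0.01]
bench = [0.03, 0.02, 0.03, 0.00]
mean_active, te, ir = information_ratio(port, bench)
```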
Additionally, multifactor models facilitate the construction of portfolios that track market indexes or alternative indexes.
In summary, multifactor models provide a comprehensive framework for quantitative portfolio management, enabling investors to gain insights into risk, construct portfolios, and make strategic investment decisions by considering multiple sources of systematic risk.
This reading discusses market risk management models and explores various techniques used to manage risk arising from market fluctuations.
The key points covered are as follows:
The concept of Value at Risk (VaR) is introduced, which estimates the minimum expected loss, either in currency units or as a percentage of portfolio value, over a certain time period and under assumed market conditions.
VaR estimation involves decomposing portfolio performance into risk factors.
Three methods of estimating VaR are discussed: the parametric method, the historical simulation method, and the Monte Carlo simulation method.
The parametric method provides a VaR estimate based on a normal distribution, considering expected returns, variances, and covariances of portfolio components. However, it may not be accurate for portfolios with non-normally distributed returns, such as those containing options.
The historical simulation method utilizes historical return data on the portfolio's holdings and allocation. It incorporates actual events but is reliant on the assumption that the future resembles the past.
The Monte Carlo simulation method requires specifying a statistical distribution of returns and generating random outcomes. It offers flexibility but can be complex and time-consuming.
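Of the three methods, historical simulation is the simplest to sketch: sort the historical returns and read off the loss at the chosen percentile. The index convention below is one common choice among several (the data are hypothetical):

```python
def historical_var(returns, confidence=0.95):
    """Historical-simulation VaR: the loss at the (1 - confidence)
    percentile of the sorted return history, reported as a positive
    number. The percentile index convention is one common choice."""
    sorted_returns = sorted(returns)
    index = int((1 - confidence) * len(sorted_returns))
    return -sorted_returns[index]

# 100 hypothetical daily returns running from -5.0% up to +4.9%.
rets = [(-50 + i) / 1000 for i in range(100)]
var_95 = historical_var(rets, confidence=0.95)
```

Here the 95% VaR is 4.5%: on 5% of the historical days the loss was at least that large.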
It is important to note that there is no universally correct method for estimating VaR.
VaR offers advantages such as simplicity, ease of understanding and communication, capturing comprehensive information in a single measure, facilitating risk comparison across asset classes and portfolios, supporting capital allocation decisions, performance evaluation, and regulatory acceptance.
However, VaR has limitations, including subjectivity, sensitivity to discretionary choices, potential underestimation of extreme events, failure to account for liquidity and correlation risks, vulnerability to trending or volatility regimes, misconception as a worst-case scenario, oversimplification of risk, and focus on the left tail.
Variations and extensions of VaR, such as conditional VaR (CVaR), incremental VaR (IVaR), and marginal VaR (MVaR), provide additional useful information.
Conditional VaR measures the average loss conditional on exceeding the VaR cutoff.
Incremental VaR quantifies the change in portfolio VaR resulting from adding, deleting, or adjusting position sizes.
MVaR assesses the change in portfolio VaR due to small changes in positions and helps determine asset contributions to overall VaR in a diversified portfolio.
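Conditional VaR extends the historical-simulation idea by averaging the tail losses rather than reading off a single cutoff (an illustrative sketch under one common tail convention, with hypothetical data):

```python
def conditional_var(returns, confidence=0.95):
    """Average loss conditional on being in the worst (1 - confidence)
    share of outcomes; also called expected shortfall."""
    sorted_returns = sorted(returns)
    cutoff = int((1 - confidence) * len(sorted_returns))
    tail = sorted_returns[:cutoff]  # the worst (1 - confidence) share
    return -sum(tail) / len(tail)

rets = [(-50 + i) / 1000 for i in range(100)]
cvar_95 = conditional_var(rets, confidence=0.95)
```

For these data the 95% CVaR (4.8%) exceeds the 95% VaR (4.5%), as it must: the average tail loss is always at least the tail cutoff.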
Ex ante tracking error measures the potential deviation of an investment portfolio's performance from its benchmark.
Sensitivity measures, such as beta, duration, convexity, delta, gamma, and vega, quantify how a security or portfolio reacts to changes in specific risk factors. However, they do not indicate the magnitude of potential losses.
Risk managers can employ sensitivity measures to gain a comprehensive understanding of portfolio sensitivity.
Stress tests subject a portfolio to extreme negative stress in specific exposures.
Scenario measures, including stress tests, evaluate portfolio performance under high-stress market conditions. Historical scenarios utilize past financial market history, while hypothetical scenarios model extreme movements and co-movements not yet experienced.
Reverse stress testing involves stressing a portfolio's significant exposures.
Sensitivity and scenario risk measures can complement VaR, as they do not rely solely on historical data and can overcome assumptions of normal distributions.
Constraints, such as risk budgets, position limits, scenario limits, stop-loss limits, and capital allocation, are widely used in risk management.
Risk measurements and constraints, along with leverage, risk factor exposure, accounting, and regulatory requirements, vary across market participants, such as banks, asset managers, pension funds, property and casualty insurers, and life insurers.
These risk measures help assess liquidity and asset/liability mismatch, potential losses, leverage ratios, interest rate sensitivities, economic capital, surplus at risk, asset allocation ranges, and the impact of catastrophic events on market and insurance risks.
In this reading, the candidate will explore various techniques used to enhance backtesting in the investment industry.
One such technique is rolling-window backtesting, which aims to approximate real-life investment processes and understand the risk-return trade-off of investment strategies.
The process involves specifying investment hypotheses and goals, determining strategy rules and processes, forming an investment portfolio, periodically rebalancing it, and analyzing performance and risk profiles.
Rolling-window backtesting is implemented using a rolling-window framework. Researchers calibrate factors or trade signals based on the rolling window, periodically rebalance the portfolio, and track its performance over time.
This methodology serves as a proxy for actual investing. However, it is important to consider behavioral issues such as survivorship bias and look-ahead bias when conducting backtesting.
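The rolling-window loop can be sketched as follows (a toy strategy of our own invention: "calibrate" a signal as the trailing mean return, hold the asset only when the signal is positive, and record each out-of-sample return):

```python
def rolling_backtest(returns, window=3):
    """Rolling-window backtest sketch: calibrate a signal on the
    trailing window, rebalance, then record the next period's
    out-of-sample return."""
    realized = []
    for t in range(window, len(returns)):
        signal = sum(returns[t - window:t]) / window  # in-sample calibration
        position = 1 if signal > 0 else 0             # rebalance decision
        realized.append(position * returns[t])        # out-of-sample return
    return realized

rets = [0.01, 0.02, -0.01, 0.03, -0.04, 0.01]
oos = rolling_backtest(rets, window=3)
```

Only data available before each decision point enters the calibration, which is precisely the discipline that guards against look-ahead bias.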
Asset returns often exhibit characteristics such as negative skewness, excess kurtosis (fat tails), and tail dependence, which deviate from a normal distribution.
These characteristics introduce randomness and downside risk in asset returns that may not be fully captured by standard rolling-window backtesting. To address this, additional techniques can be employed.
Scenario analysis helps investors understand how an investment strategy performs in different structural regimes, accounting for potential structural breaks in financial data.
On the other hand, historical simulation, while relatively straightforward, shares similar pros and cons to rolling-window backtesting. It assumes that the historical data distribution is sufficient to represent future uncertainty and often uses bootstrapping techniques.
Monte Carlo simulation, a more sophisticated technique, requires selecting the statistical distribution of decision variables or return drivers.
The multivariate normal distribution is commonly used due to its simplicity. However, it fails to capture negative skewness and fat tails observed in factor and asset returns.
Complementing Monte Carlo simulation, sensitivity analysis explores the effects of changes in input variables on the target variable and risk profiles. This analysis helps uncover limitations in the conventional approach that assumes a multivariate normal distribution.
Another approach, the multivariate skewed t-distribution, considers skewness and kurtosis but requires estimating more parameters, which increases the likelihood of larger estimation errors.
These techniques provide investors with a more comprehensive understanding of the limitations and potential improvements in backtesting methodologies.
The reading emphasizes the fundamental relationship between financial asset prices and the underlying economy. Financial assets represent ownership in the real economy and are influenced by the decisions of economic agents regarding consumption and saving.
The market value of financial securities is determined by calculating the present value of expected future cash flows. Factors such as expected inflation, real interest rates, and risk premiums are taken into account when discounting these cash flows. The business cycle has an impact on these factors, thereby affecting the market value of financial instruments.
Real short-term interest rates are positively correlated with the long-term growth rate and volatility of the economy. Inflation expectations and the difference between actual and potential output also play a role in determining the policy rate set by central banks. Short-term nominal interest rates are influenced by both real interest rates and inflation expectations.
Investors in bonds, who are risk-averse, demand a risk premium as compensation for investing in government bonds. The term structure of interest rates is shaped by a combination of short-term rates, inflation expectations, and risk premiums. The yield differential between conventional government bonds and index-linked bonds is driven by expectations of future inflation and the uncertainty perceived by investors.
The measured credit spread, which represents the additional risk associated with corporate bonds compared to government bonds, tends to increase during economic downturns and decrease during periods of strong growth. Equity investments carry higher uncertainty in future cash flows, leading to a larger equity risk premium compared to credit risk premiums. Economic weakness amplifies the uncertainty and raises the equity risk premium further.
The price-to-earnings ratio (P/E) of equities tends to rise during economic expansions and decline during recessions. Various factors, including real interest rates, equity risk premium, expected earnings growth, and operating/financial risk, contribute to a "high" P/E ratio.
Commercial property investments are evaluated in a similar manner to equities, focusing on cash flows derived from rent. Uncertainty surrounding these cash flows intensifies during economic downturns, resulting in a higher risk premium for commercial property investments. Commercial property prices are influenced by the business cycle and investors typically demand a relatively high risk premium due to the limited ability of commercial property to provide a hedge against economic downturns and the illiquid nature of these investments.
This reading delves into the essential concepts and principles associated with active portfolio management, which aims to enhance value compared to a benchmark portfolio by employing risk and return principles from mean-variance portfolio theory.
The following key points are discussed:
One of the key concepts is value added, which signifies the disparity between the returns of a managed portfolio and those of a passive benchmark portfolio.
To justify active management, value added must be expected (ex ante) to be positive.
Active weights within the portfolio reflect the variations in asset weights between the managed portfolio and benchmark portfolio. Positive active weights indicate overweighting, negative active weights indicate underweighting, and the sum of active weights equals zero.
Positive value added is achieved when assets with positive active weights generate higher returns than assets with negative active weights. This correlation between active asset returns and active weights determines the value added.
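A short sketch of the calculation (hypothetical weights and returns): value added is the sum of active weights times asset returns, and because the active weights sum to zero, it is positive exactly when the overweighted assets outperform the underweighted ones.

```python
def value_added(active_weights, asset_returns):
    """Value added as the sum of active weights times asset returns.
    Active weights must sum to zero: overweights offset underweights."""
    assert abs(sum(active_weights)) < 1e-12
    return sum(w * r for w, r in zip(active_weights, asset_returns))

# Overweight by 5% an asset returning 8%; underweight by 5% one returning 3%.
va = value_added([0.05, -0.05], [0.08, 0.03])
```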
Value added can originate from diverse decisions, including security selection, asset class allocation, economic sector weightings, and geographic weights.
To evaluate actively managed portfolios, two important metrics are discussed: the Sharpe ratio and the information ratio. The Sharpe ratio assesses the reward per unit of risk in absolute returns, while the information ratio evaluates the reward per unit of risk in benchmark-relative returns.
For an optimally constructed active portfolio, the squared Sharpe ratio equals the squared Sharpe ratio of the benchmark plus the squared information ratio, so a higher information ratio implies a higher attainable Sharpe ratio. The optimal level of active risk that maximizes a portfolio's Sharpe ratio depends on the assumed forecasting accuracy, or ex ante information coefficient.
Adjusting the active risk of a strategy and the total volatility of a portfolio are also discussed. The active risk can be adjusted by combining it with a benchmark position, and the total volatility can be adjusted by incorporating cash.
The fundamental law of active portfolio management furnishes a framework for evaluating investment strategies. It takes into account factors such as skill, portfolio structure, strategy breadth, and aggressiveness.
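The full fundamental law combines these factors into a single expression for expected active return, E(R_A) = TC x IC x sqrt(BR) x sigma_A, sketched below with hypothetical inputs:

```python
import math

def expected_active_return(ic, breadth, active_risk, tc=1.0):
    """Full fundamental law of active management:
    E(R_A) = TC * IC * sqrt(BR) * sigma_A, where IC is the information
    coefficient (skill), BR the number of independent bets (breadth),
    sigma_A the active risk (aggressiveness), and TC the transfer
    coefficient (portfolio-construction constraints; 1.0 = unconstrained)."""
    return tc * ic * math.sqrt(breadth) * active_risk

# Modest skill (IC = 0.05) applied across 100 independent decisions
# with 4% active risk and no implementation constraints.
er = expected_active_return(ic=0.05, breadth=100, active_risk=0.04)
```

With these inputs the expected active return is 2%: breadth enters through its square root, so doubling the number of independent bets raises expected value added by only about 41%.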
The fundamental law finds application in various contexts, including the selection of country equity markets and the timing of credit and duration exposures in fixed-income funds.
However, it's important to note the limitations of the fundamental law, which include uncertainties surrounding the ex ante information coefficient and the conceptual definition of strategy breadth.
The text discusses exchange rates and their complexities. It mentions that there is no simple framework for valuing exchange rates, but most economists believe in the existence of an equilibrium level that currencies gravitate towards in the long run.
The module highlights the importance of forward exchange rates, which play a crucial role in understanding and anticipating future currency values. Unlike spot exchange rates, which apply to immediate currency trades, forward exchange rates are quoted for trades that will be settled at a specified future date. These rates are typically expressed as points added to (or subtracted from) the spot exchange rate.
The relationship between forward exchange rates and spot exchange rates can provide valuable insights into market expectations and investor sentiment. When the forward exchange rate is higher than the spot exchange rate, it indicates a forward premium for the base currency, suggesting that market participants expect the base currency to appreciate in the future. Conversely, when the forward exchange rate is lower than the spot exchange rate, it signifies a forward discount, indicating expectations of depreciation for the base currency.
The determination of forward exchange rates is influenced by various factors, the most significant being the interest rate differential between the two currencies. Generally, the higher-yielding currency trades at a forward discount and the lower-yielding currency at a forward premium. This relationship reflects covered interest rate parity, which holds that an investor achieves the same return from investing domestically or from investing in a foreign currency while hedging the exchange rate risk.
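Covered interest rate parity pins down the forward rate as F = S x (1 + i_domestic) / (1 + i_foreign), with the rate quoted as domestic (price) currency per unit of foreign (base) currency. A sketch with hypothetical rates:

```python
def forward_rate(spot, domestic_rate, foreign_rate):
    """Covered interest rate parity: F = S * (1 + i_d) / (1 + i_f),
    quoting domestic (price) currency per unit of foreign (base)
    currency. The higher-yielding currency ends up at a forward
    discount, the lower-yielding one at a forward premium."""
    return spot * (1 + domestic_rate) / (1 + foreign_rate)

# Spot 1.20 domestic per foreign, domestic rate 4%, foreign rate 2%:
# the lower-yielding foreign base currency trades at a forward premium.
fwd = forward_rate(1.20, 0.04, 0.02)
points = fwd - 1.20  # forward points over the spot rate
```

Any forward quote away from this parity value would allow a riskless round trip of borrowing in one currency, lending in the other, and locking in the conversion forward.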
The time to maturity also affects forward exchange rates, as the points added to the spot rate tend to be proportional to the time remaining until the settlement date. Longer maturities typically entail higher forward points, reflecting the increased uncertainty and risk associated with longer-term currency projections.
It is important to note that while forward exchange rates can provide valuable insights, they are not infallible predictors of future spot exchange rates. Market conditions, economic factors, and unforeseen events can lead to deviations between the forward and actual exchange rates. Nonetheless, forward rates remain an essential tool for businesses, investors, and speculators to manage and hedge against future currency fluctuations.
Misalignments in exchange rates gradually build up over time and can lead to economic imbalances. Factors such as monetary and fiscal policies, current account trends, capital flows, and government intervention affect exchange rate movements.
Key points highlighted include the difference between spot and forward exchange rates, bid and offer prices quoted by market makers, bid-offer spreads, and the concept of arbitrage. The text also covers international parity conditions, such as purchasing power parity, interest rate parity, and the international Fisher effect, and their impact on exchange rates. It notes that these conditions rarely hold in the short term but tend to hold over longer horizons.
The relationship between monetary policy, fiscal policy, and exchange rates is discussed, including how tightening or easing monetary policy can affect currency value. The Mundell-Fleming model and the monetary model of exchange rate determination are mentioned in this context. The impact of fiscal policy on interest rates, capital flows, and trade balance is also explored.
The portfolio balance model suggests that government debt and budget deficits can influence exchange rates if investors are compensated with higher returns or currency depreciation. Capital inflows can lead to boom-like conditions and currency overvaluation, while capital controls may be used to manage exchange rates and prevent crises. The role of government policies in influencing exchange rates is emphasized, particularly in emerging markets with large foreign exchange reserves.
The text concludes by discussing factors associated with currency crises, such as capital market liberalization, foreign capital inflows, banking crises, fixed exchange rates, declining foreign exchange reserves, deviations from historical means, deteriorating terms of trade, money growth, and inflation.
This reading explores the factors that determine the long-term growth trend in an economy. It emphasizes the importance of assessing both near-term and sustainable growth rates when developing global investment strategies.
The sustainable rate of economic growth is measured by the increase in an economy's productive capacity, or potential GDP. Real GDP growth reflects the overall expansion of the economy, while per capita GDP measures the standard of living in each country. Investment opportunities vary among countries due to differences in real GDP growth and per capita GDP.
Equity markets respond to anticipated earnings growth, with higher sustainable economic growth generally leading to increased earnings growth and equity valuations.
However, long-term earnings growth is constrained by the growth in potential GDP, which is influenced by labor productivity. Technological advances and improvements in management methods play a significant role in enhancing productivity and total factor productivity (TFP). TFP represents the residual component of growth after considering the contributions of explicit factors such as labor and capital.
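In growth-accounting terms, TFP growth is the Solow residual: output growth minus the share-weighted contributions of capital and labor. A sketch with hypothetical figures:

```python
def tfp_growth(output_growth, capital_growth, labor_growth, alpha):
    """Solow growth-accounting residual: TFP growth equals output
    growth minus capital's contribution (weighted by its output
    share alpha) minus labor's contribution (weighted by 1 - alpha)."""
    return output_growth - alpha * capital_growth - (1 - alpha) * labor_growth

# Hypothetical inputs: 3% output growth, 4% capital growth,
# 1% labor growth, capital share of output alpha = 0.3.
tfp = tfp_growth(0.03, 0.04, 0.01, alpha=0.3)
```

Here TFP growth is 1.1%: the part of output growth that the measured inputs cannot explain.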
The neoclassical model and endogenous growth theory offer different perspectives on economic growth. The neoclassical model suggests that sustained growth depends on population growth, progress in TFP, and labor's share of income, with diminishing returns to capital.
In contrast, endogenous growth theory incorporates technological progress within the model and allows for constant or increasing returns to capital.
Expenditures on research and development (R&D) and human capital have positive externalities, benefiting the overall economy beyond individual companies.
Opening an economy to financial and trade flows has a significant impact on economic growth. Evidence suggests that more open and trade-oriented economies tend to experience faster growth. The convergence hypothesis predicts that developing countries should have higher rates of productivity and GDP growth, leading to a narrowing per capita GDP gap with developed economies.
However, factors such as low investment and savings rates, lack of property rights, political instability, poor education and health, trade restrictions, and unfavorable tax and regulatory policies can hinder convergence.
Understanding these factors and their interplay is crucial for investors seeking to make informed decisions about long-term economic growth and global investment opportunities.
Having knowledge of regulation is crucial because it can have far-reaching and significant effects, impacting both the broader economy and individual entities and securities.
When analyzing regulation, it is important to consider its origin from various sources and its application in different areas. A framework that encompasses different types of regulators, regulatory approaches, and areas of impact can be helpful in understanding the potential effects of new regulations and their implications for different entities.
It is common for multiple regulators to address specific issues, each with their own objectives and preferred regulatory tools. When developing regulations, regulators should carefully consider the costs and benefits involved.
Assessing the net regulatory burden, which takes into account the private costs and benefits of regulation, is also an important aspect of regulatory analysis. However, evaluating the costs and benefits of regulation can be challenging.
Key points from the reading include the necessity of regulation to address informational frictions and externalities, the extensive regulation of securities markets and financial institutions due to the potential consequences of failures in the financial system, and the focus of regulators on areas such as prudential supervision, financial stability, market integrity, and economic growth.
Regulatory competition refers to the competition among regulatory bodies to attract specific entities through the use of regulation. Given the wide scope of regulation, it is useful to have a framework that identifies potential areas of regulation, both current and anticipated, which may impact the entity under consideration.
Regulation is typically enacted by legislative bodies, issued by regulatory bodies in the form of administrative regulations, and interpreted by courts, resulting in judicial law. The interdependence and potentially conflicting objectives among regulators are important factors to consider for regulators, regulated entities, and those assessing the effects of regulation.
Regulatory capture occurs when regulation is designed to benefit the interests of regulated entities. Regulators have responsibilities in both substantive and procedural laws, addressing the rights, responsibilities, relationships among entities, and the protection and enforcement of these laws.
Entities may engage in regulatory arbitrage, utilizing differences in economic substance, regulatory interpretation, or regulatory regimes to their advantage.
Regulators have a range of tools at their disposal, including regulatory mandates, behavior restrictions, provision of public goods, and public financing of private projects. It is important to choose regulatory tools that maintain a stable regulatory environment, characterized by predictability, effectiveness, time consistency, and enforceability.
To assess regulation and its outcomes, regulators should conduct ongoing cost-benefit analyses, develop improved measurement techniques for regulatory outcomes, and apply economic principles as guiding factors. Analysts should also consider the net regulatory burden on the entity of interest as a crucial aspect to evaluate.
Intercompany investments have a significant impact on business operations and pose challenges for analysts when assessing company performance.
There are five main forms of investments in other companies: investments in financial assets, investments in associates, joint ventures, business combinations, and investments in special purpose and variable interest entities.
Investments in financial assets refer to investments where the investor has no significant influence. They can be measured and reported based on fair value through profit or loss, fair value through other comprehensive income, or amortized cost. Both IFRS and US GAAP treat investments in financial assets similarly.
Investments in associates and joint ventures involve situations where the investor has significant influence but not control over the investee's business activities.
Both IFRS and US GAAP require the equity method of accounting for these investments. Under the equity method, income is recognized as earned rather than when dividends are received. The equity investment is carried at cost, adjusted for post-acquisition income and dividends. It is reported as a single line item on the balance sheet and income statement.
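The equity-method mechanics described above can be sketched numerically. A minimal illustration, with the 30% stake and all amounts hypothetical:

```python
# Hypothetical equity-method sketch: a 30% stake in an associate.
cost = 1_000_000              # initial cost of the investment
ownership = 0.30
associate_net_income = 400_000
associate_dividends = 100_000

equity_income = ownership * associate_net_income      # recognized as earned: ~120,000
dividends_received = ownership * associate_dividends  # reduce carrying value: ~30,000

# Reported as a single line item on the balance sheet
carrying_value = cost + equity_income - dividends_received  # ~1,090,000
```

Note that income is picked up as the associate earns it, while dividends are a return of the investment rather than income.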
Business combinations are accounted for using the acquisition method under both IFRS and US GAAP. The fair value of the consideration given is used to measure the identifiable assets and liabilities acquired in the combination.
Goodwill, representing the difference between the acquisition value and the fair value of the target's net assets, is not amortized. Instead, it is evaluated for impairment at least annually, with impairment losses reported on the income statement. IFRS and US GAAP use different approaches to determine and measure impairment losses.
When the acquiring company owns less than 100% of the target company, the non-controlling (minority) shareholders' interests are reported on the consolidated financial statements. IFRS allows the non-controlling interest to be measured at fair value (full goodwill) or the proportionate share of the acquiree's net assets (partial goodwill). US GAAP requires the non-controlling interest to be measured at fair value (full goodwill).
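The full versus partial goodwill distinction can be made concrete with a hedged sketch of an 80% acquisition (all amounts hypothetical):

```python
# Hypothetical 80% acquisition illustrating full vs. partial goodwill.
fv_consideration = 800.0   # paid for an 80% stake
fv_target_total = 1000.0   # implied fair value of 100% of the target
fv_net_assets = 700.0      # fair value of identifiable net assets
stake = 0.80

# Full goodwill (required under US GAAP, permitted under IFRS)
full_goodwill = fv_target_total - fv_net_assets      # 300
nci_full = (1 - stake) * fv_target_total             # NCI at fair value: 200

# Partial goodwill (IFRS option)
partial_goodwill = fv_consideration - stake * fv_net_assets  # 800 - 560 = 240
nci_partial = (1 - stake) * fv_net_assets                    # NCI at share of net assets: 140
```

Under either option, total assets differ only by the goodwill and non-controlling interest amounts recognized.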
Consolidated financial statements are prepared in each reporting period to present the combined financial information of the parent company and its subsidiaries.
Special purpose entities (SPEs) and variable interest entities (VIEs) must be consolidated by the entity expected to absorb the majority of expected losses or receive the majority of expected residual benefits.
The reading delves into two types of employee compensation: post-employment benefits and share-based compensation. Despite their differences, both forms go beyond salaries and pose complex challenges in terms of valuation, accounting, and reporting.
Although there is a convergence between IFRS and US GAAP accounting standards, variations in social systems, laws, and regulations across countries can result in discrepancies in pension and share-based compensation plans, impacting a company's financial reports and earnings.
Defined contribution pension plans establish the contribution amount, with the eventual pension benefit depending on the value of plan assets at retirement.
These plans do not generate liabilities, making balance sheet reporting less significant. Conversely, defined benefit pension plans specify the pension benefit based on factors like tenure and final salary. Such plans are funded by the company through a separate pension trust, with funding requirements differing across countries.
Both IFRS and US GAAP mandate reporting a pension liability or asset on the balance sheet, calculated as the projected benefit obligation minus the fair value of plan assets. However, there are limitations on the amount of pension assets that can be reported.
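The balance-sheet calculation reduces to a simple funded-status difference. A minimal sketch (hypothetical amounts; the reporting caps on pension assets are ignored here):

```python
def net_pension_position(pbo, plan_assets):
    """Funded status: projected benefit obligation minus fair value of plan assets.
    Positive -> net pension liability (underfunded);
    negative -> net pension asset (overfunded)."""
    return pbo - plan_assets

underfunded = net_pension_position(pbo=5_200, plan_assets=4_700)  # liability of 500
overfunded = net_pension_position(pbo=3_000, plan_assets=3_400)   # asset of 400
```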
Under IFRS, service costs and net interest expense or income are recognized in the profit and loss (P&L) statement, while remeasurements are recognized in other comprehensive income (OCI) and are not subsequently amortized to P&L. Under US GAAP, current service costs, interest expense, and the expected return on plan assets are recognized in the P&L, while other components are recognized in OCI and gradually amortized to future P&L.
Estimating future obligations for defined benefit pension plans and post-employment benefits relies on various assumptions, such as discount rates, salary increases, expected returns on plan assets, and healthcare cost inflation.
These assumptions significantly impact the estimated obligations.
Employee compensation packages serve different purposes, such as providing liquidity, retaining employees, and offering incentives. Salary, bonuses, and share-based compensation are common elements.
Share-based compensation, including stocks and stock options, aligns employees' interests with those of shareholders without necessitating immediate cash outflows.
Both IFRS and US GAAP require reporting the fair value of share-based compensation, and the chosen valuation technique or option pricing model is disclosed.
Assumptions such as exercise price, stock price volatility, award duration, forfeited options, dividend yield, and risk-free interest rate influence the estimated fair value and compensation expense, with subjective assumptions potentially exerting a significant impact.
The translation of foreign currency amounts is a significant accounting issue for multinational companies. Fluctuations in foreign exchange rates result in changes in the functional currency values of foreign currency assets, liabilities, and subsidiaries over time.
These changes give rise to foreign exchange differences that must be reflected in a company's financial statements. The main accounting issues in managing multinational operations are how to measure these foreign exchange differences and whether to include them in the calculation of net income.
The local currency is the national currency of the entity's location, while the functional currency is the currency of the primary economic environment where the entity operates. Usually, the local currency is also the functional currency.
Any currency other than the functional currency is considered a foreign currency for accounting purposes. The presentation currency is the currency in which financial statement amounts are presented, often being the same as the local currency.
When a sale or purchase is denominated in a foreign currency, the revenue or inventory and the foreign currency accounts receivable or accounts payable are translated into the seller's or buyer's functional currency using the exchange rate on the transaction date.
Any changes in the functional currency value of the foreign currency accounts receivable or accounts payable between the transaction date and the settlement date are recognized as foreign currency transaction gains or losses in net income.
If a balance sheet date falls between the transaction date and the settlement date, the foreign currency accounts receivable or accounts payable are translated at the exchange rate on the balance sheet date.
The change in the functional currency value of these accounts is recognized as a foreign currency transaction gain or loss in income. It's important to note that these gains and losses are unrealized at the time of recognition and may or may not be realized when the transactions are settled.
Foreign currency transaction gains occur when an entity has a foreign currency receivable and the foreign currency strengthens or has a foreign currency payable and the foreign currency weakens. Conversely, foreign currency transaction losses arise when an entity has a foreign currency receivable and the foreign currency weakens or has a foreign currency payable and the foreign currency strengthens.
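The sign conventions in the paragraph above can be captured in a short hedged sketch (exchange rates and amounts hypothetical, quoted as functional currency per unit of foreign currency):

```python
def fx_transaction_gain(fc_amount, rate_start, rate_end, is_receivable):
    """Functional-currency gain (+) or loss (-) on an unhedged foreign-currency
    receivable or payable when the rate moves from rate_start to rate_end."""
    change = fc_amount * (rate_end - rate_start)
    return change if is_receivable else -change

# FC 100,000 receivable; FC strengthens from 1.10 to 1.15 -> gain of ~5,000
gain_on_receivable = fx_transaction_gain(100_000, 1.10, 1.15, is_receivable=True)

# The same strengthening on an FC 100,000 payable -> loss of ~5,000
loss_on_payable = fx_transaction_gain(100_000, 1.10, 1.15, is_receivable=False)
```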
Companies are required to disclose the net foreign currency gain or loss included in income. They can choose to report foreign currency transaction gains and losses as part of operating income or as a separate component of non-operating income. Differences in reporting methods can affect the comparability of operating profit and operating profit margin between companies.
In preparing consolidated financial statements, the foreign currency financial statements of foreign operations need to be translated into the presentation currency of the parent company.
Two different translation methods are commonly used.
Under the current rate method, assets, liabilities, and income items are translated at different exchange rates, while under the temporal method, different exchange rates are used for monetary and non-monetary items.
The choice of translation method and the recognition of translation adjustments as income or as a separate component of equity depend on the foreign operation's functional currency.
Disclosure requirements include reporting the total amount of translation gain or loss in income and the amount of translation adjustment in stockholders' equity. There is no requirement to separately disclose gains or losses from foreign currency transactions and gains or losses from the application of the temporal method.
It's important to understand the impact of foreign currency translation on a company's financial results, regardless of whether the financial statements are prepared in accordance with International Financial Reporting Standards (IFRS) or US Generally Accepted Accounting Principles (GAAP).
While there are no major differences in foreign currency translation rules between IFRS and US GAAP, the treatment of foreign operations in highly inflationary countries may vary.
Analysts can gather information about the tax impact of multinational operations from companies' disclosures on effective tax rates.
Changes in exchange rates between the reporting currency and the currency in which sales are made can affect a multinational company's sales growth, but growth resulting from changes in volume or price is generally considered more sustainable than growth from currency exchange rate moves.
Financial institutions, due to their systemic importance, are subject to heavy regulation to mitigate systemic risk.
The recent failures of Silicon Valley Bank (SVB) and First Republic Bank illustrate the importance of monitoring large banks and assessing their financial health to prevent such failures from recurring.
The Basel Committee, comprising representatives from central banks and bank supervisors worldwide, establishes international regulatory standards for banks, including minimum capital, liquidity, and stable funding requirements.
Financial stability is a key focus of international organizations such as the Financial Stability Board, International Association of Insurance Supervisors, International Association of Deposit Insurers, and International Organization of Securities Commissions.
Unlike manufacturing or merchandising companies, financial institutions primarily hold financial assets, exposing them to credit risk, liquidity risk, market risk, and interest rate risk. Their asset values closely align with fair market values.
The CAMELS approach is widely used to analyze banks, considering Capital adequacy, Asset quality, Management capabilities, Earnings sufficiency, Liquidity position, and Sensitivity to market risk. Capital adequacy ensures the bank can absorb losses without severe financial damage, while asset quality and risk management play a crucial role. Management capabilities, earnings, liquidity, and market risk sensitivity are also evaluated.
Other important attributes in analyzing banks include government support, corporate culture, competitive environment, off-balance-sheet items, segment information, currency exposure, and risk disclosures. Insurance companies can be categorized as property and casualty (P&C) or life and health (L&H). They generate revenue from premiums and investment income. P&C policies have shorter terms and more variable claims, while L&H policies are longer-term and more predictable.
In analyzing insurance companies, key areas to consider include their business profile, earnings characteristics, investment returns, liquidity, and capitalization. Profitability analysis for P&C companies includes evaluating loss reserves and the combined ratio.
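The combined ratio mentioned above can be sketched as follows; this is a simplification that measures both component ratios against earned premiums (some practitioners use written premiums for the expense ratio), and all figures are hypothetical:

```python
def combined_ratio(losses_incurred, underwriting_expenses, net_premiums_earned):
    """P&C combined ratio = loss ratio + expense ratio.
    Below 1.0 (100%) indicates an underwriting profit."""
    loss_ratio = losses_incurred / net_premiums_earned
    expense_ratio = underwriting_expenses / net_premiums_earned
    return loss_ratio + expense_ratio

cr = combined_ratio(losses_incurred=650, underwriting_expenses=300,
                    net_premiums_earned=1000)  # 0.95 -> underwriting profit
```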
Assessing the quality of financial reports is a crucial analytical skill that involves the evaluation of reporting quality and results quality.
The quality of financial reporting can vary, ranging from high to low, and can be influenced by various factors such as revenue and expense recognition, classification on the statement of cash flows, and the recognition, classification, and measurement of assets and liabilities on the balance sheet.
To evaluate financial reporting quality, several steps are typically followed.
These steps include gaining an understanding of the company's business and industry, comparing financial statements across different periods, assessing accounting policies, conducting financial ratio analysis, examining the statement of cash flows, reviewing risk disclosures, and analyzing management compensation and insider transactions. Earnings of high quality, assuming high reporting quality, have a greater positive impact on a company's value compared to earnings of low quality.
Indicators used to assess earnings quality encompass recurring earnings, earnings persistence, outperforming benchmarks, and identifying poor-quality earnings through enforcement actions and restatements.
Earnings with a significant accrual component tend to be less persistent, and companies consistently meeting or narrowly beating benchmarks may raise concerns about the quality of their earnings. Accounting misconduct often involves problems with revenue recognition or misrepresentation of expenses.
Financial results quality can be evaluated through bankruptcy prediction models, which measure the likelihood of default or bankruptcy. Similarly, high-quality reported cash flows indicate satisfactory economic performance and reliable reporting, while low-quality cash flows may reflect genuine underperformance or distorted representation of economic reality.
Regarding the balance sheet, financial reporting quality is indicated by completeness, unbiased measurement, and clear presentation.
The presence of off-balance-sheet debt compromises completeness, and unbiased measurement is particularly critical when subjective valuation of assets and liabilities is involved.
Financial statements can provide insights into financial or operational risks, and the management commentary (MD&A) can furnish valuable information about risk exposures and risk management strategies. Required disclosures, such as changes in senior management or delays in financial reporting, can serve as warning signs of potential issues with financial reporting quality.
The financial press can also bring to light previously unidentified reporting problems, necessitating further investigation by analysts.
This module provides a comprehensive overview of equity valuation. It covers the scope of valuation, outlines the valuation process, introduces various valuation concepts and models, discusses the role and responsibilities of analysts in conducting valuations, and describes the elements of an effective research report for communicating valuation analysis.
The module emphasizes that valuation involves estimating the value of an asset based on variables related to future investment returns or by comparing it with similar assets. Additionally, it introduces the concept of intrinsic value, which represents the value of an asset based on a complete understanding of its investment characteristics. Furthermore, the module highlights the assumption that market prices can deviate from intrinsic value, forming the basis for active investing.
The valuation process consists of five steps: understanding the business, forecasting company performance, selecting the appropriate valuation model, converting forecasts to a valuation, and applying the results. It emphasizes the importance of understanding the industry prospects, competitive position, and corporate strategies when evaluating a business. The module discusses both top-down and bottom-up forecasting approaches and stresses the need for selecting the most suitable valuation approach based on the company's characteristics and the analyst's purpose and perspective.
The module introduces absolute valuation models and relative valuation models as the two broad categories of valuation models. Absolute valuation models estimate the intrinsic value of an asset, while relative valuation models compare its value to that of other assets. The module explains the importance of sensitivity analysis and situational adjustments in converting forecasts into valuations.
Effective communication of valuation conclusions is crucial. Therefore, the module describes the key elements of an impactful research report. These elements include timely and well-researched information, clear and concise language, identification of key assumptions, differentiation between facts and opinions, internal consistency of analysis, forecasts, and valuations, provision of sufficient information for critique, disclosure of risk factors, and acknowledgment of potential conflicts of interest.
Finally, the module emphasizes the importance of analysts adhering to standards of competence and conduct when performing valuations. Specifically, CFA Institute members are required to uphold the CFA Institute Code of Ethics and relevant Standards of Professional Conduct, ensuring the delivery of meaningful and ethical valuation analysis.
Discounted cash flow (DCF) models are commonly used by analysts to evaluate the worth of companies. These models rely on two types of cash flows: free cash flow to the firm (FCFF) and free cash flow to equity (FCFE). FCFF represents the cash available to all investors, while FCFE represents the cash available to common stockholders.
Analysts prefer using free cash flow in certain situations. These include when a company doesn't pay dividends, when dividend payments deviate significantly from the company's capacity to pay dividends, when free cash flows align with profitability within a reasonable forecast period, or when taking a controlling perspective.
The FCFF valuation approach estimates the firm's value by discounting the future FCFF using the weighted average cost of capital (WACC). The equity value is derived by subtracting the market value of debt from the firm's value. By dividing the total equity value by the outstanding shares, the value per share can be determined.
The WACC formula incorporates the after-tax cost of debt and the cost of equity, weighted by their market values. If FCFF grows at a constant rate g, firm value equals next year's FCFF divided by (WACC - g).
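A minimal sketch of the FCFF approach, using the standard WACC and single-stage constant-growth formulas (all inputs hypothetical):

```python
def wacc(equity_mv, debt_mv, cost_equity, cost_debt, tax_rate):
    """Market-value-weighted average cost of capital with after-tax cost of debt."""
    v = equity_mv + debt_mv
    return (equity_mv / v) * cost_equity + (debt_mv / v) * cost_debt * (1 - tax_rate)

def firm_value_constant_growth(fcff_next, wacc_rate, g):
    """Single-stage model: V = FCFF1 / (WACC - g), valid only when WACC > g."""
    return fcff_next / (wacc_rate - g)

w = wacc(equity_mv=600, debt_mv=400, cost_equity=0.12, cost_debt=0.06, tax_rate=0.25)
# w = 0.6 * 0.12 + 0.4 * 0.06 * 0.75 = 0.09

firm_value = firm_value_constant_growth(fcff_next=45.0, wacc_rate=w, g=0.04)  # ~900
equity_value = firm_value - 400   # subtract market value of debt -> ~500
```

Dividing `equity_value` by shares outstanding would give the value per share, as described above.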
Alternatively, the FCFE valuation approach calculates the equity value by discounting FCFE at the required rate of return on equity. Dividing the total equity value by the number of outstanding shares gives the value per share.
Both FCFF and FCFE can be calculated starting from net income or cash flow from operations, depending on the available information. However, it is not recommended to directly use earnings components such as net income, EBIT, EBITDA, and CFO as cash flow measures for valuation, as they may either count certain parts of the cash flow stream twice or exclude them.
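The standard starting-from-net-income definitions can be sketched as follows (amounts hypothetical; NCC denotes non-cash charges, FCInv and WCInv fixed- and working-capital investment):

```python
def fcff_from_net_income(ni, ncc, interest_expense, tax_rate, fcinv, wcinv):
    """FCFF = NI + NCC + after-tax interest - FCInv - WCInv."""
    return ni + ncc + interest_expense * (1 - tax_rate) - fcinv - wcinv

def fcfe_from_fcff(fcff, interest_expense, tax_rate, net_borrowing):
    """FCFE = FCFF - after-tax interest + net borrowing."""
    return fcff - interest_expense * (1 - tax_rate) + net_borrowing

fcff = fcff_from_net_income(ni=250, ncc=90, interest_expense=40,
                            tax_rate=0.25, fcinv=120, wcinv=30)
# 250 + 90 + 30 - 120 - 30 = 220

fcfe = fcfe_from_fcff(fcff, interest_expense=40, tax_rate=0.25, net_borrowing=25)
# 220 - 30 + 25 = 215
```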
The expressions for FCFF and FCFE valuation can be adjusted to accommodate complex capital structures and preferred stock. Two-stage models are commonly used, assuming either a constant growth rate or a decline in growth in the first stage followed by a sustainable growth rate in the second stage.
To forecast FCFF and FCFE, analysts develop models based on sales forecasts, with profitability, investments, and financing derived from changes in sales. Three-stage models are often employed to approximate cash flow streams that fluctuate from year to year.
Non-operating assets, such as excess cash, marketable securities, and nonperforming assets, are treated separately from operating assets in the valuation process. They are valued independently and added to the value of operating assets to determine the overall firm value.
The passage explores important valuation indicators used in professional practice and their practical applications. It discusses the concept of price multiples, such as price-to-earnings (P/E), price-to-book (P/B), price-to-sales (P/S), and price-to-cash flow, which are commonly employed to assess the value of stocks. These multiples are often used in the method of comparables, where they are compared to benchmark values derived from similar companies or industry averages. The law of one price justifies the use of comparables in valuation.
The passage also delves into the use of P/E ratios, considering both trailing and forward earnings, and addresses the issue of cyclicality by normalizing earnings. It introduces the notion of earnings yield as the inverse of P/E and mentions the PEG ratio, which incorporates earnings growth, as a means of evaluating attractiveness.
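A quick numeric sketch of trailing P/E, earnings yield, and the PEG ratio (price, EPS, and growth figures hypothetical; PEG conventionally divides P/E by expected growth expressed in percent):

```python
price = 48.0
trailing_eps = 3.2
expected_growth_pct = 12.0   # expected earnings growth, in percent

pe = price / trailing_eps            # trailing P/E: 15.0
earnings_yield = trailing_eps / price  # inverse of P/E
peg = pe / expected_growth_pct       # 1.25; a lower PEG suggests more growth per unit of P/E
```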
Furthermore, the passage discusses book value per share as an indicator of shareholders' investment but notes potential challenges due to factors like inflation and accounting distortions. The fundamental drivers of P/B ratios are identified as return on equity (ROE) and the required rate of return.
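The link between those drivers and P/B is often expressed through the single-stage justified P/B formula; a hedged sketch with hypothetical inputs:

```python
def justified_pb(roe, r, g):
    """Justified P/B = (ROE - g) / (r - g), from a single-stage (Gordon-type) model.
    Valid only when r > g."""
    return (roe - g) / (r - g)

# ROE above the required return implies a price above book value
pb = justified_pb(roe=0.15, r=0.10, g=0.05)  # (0.10) / (0.05) = 2.0
```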
In addition, the passage highlights the importance of the price-to-sales ratio (P/S), emphasizing the stability and comparability of sales figures. However, it acknowledges that P/S may overlook differences in cost structure and can be subject to manipulation through revenue recognition practices. The drivers of P/S ratios include profit margin, growth rate, and the required rate of return.
The concept of enterprise value (EV) is introduced as a comprehensive measure of company value, and the EV-to-sales ratio is proposed as a more suitable alternative to P/S for comparing companies with different capital structures.
The passage also explores the use of price-to-cash flow multiples, which are considered more stable than P/E ratios. It discusses various cash flow measures, such as earnings plus noncash charges, cash flow from operations, free cash flow to equity, and EBITDA, in relation to price-to-cash flow ratios. The drivers of this ratio are identified as the expected growth rate of future cash flow and the required rate of return.
Additionally, the passage touches upon other valuation indicators, including EV/EBITDA, dividend yield, momentum indicators, relative strength, and the use of screens in stock selection.
Overall, the passage provides an overview of different valuation indicators, their rationales, and considerations for their practical application.
Real estate investments encompass different forms, including direct ownership (private equity), indirect ownership through publicly traded equity, direct mortgage lending (private debt), and securitized mortgages (publicly traded debt). Investing in real estate income property offers various motivations, such as current income, price appreciation, inflation hedge, diversification, and tax benefits.
Incorporating equity real estate investments into a traditional portfolio can bring diversification benefits due to their imperfect correlation with stocks and bonds. Equity real estate investments may also serve as a hedge against inflation if the income stream can be adjusted accordingly and real estate prices rise with inflation. Debt investors in real estate primarily rely on promised cash flows and typically do not participate in the property's value appreciation, similar to fixed-income investments like bonds.
The performance of real estate investments is influenced by the underlying property's value, with location playing a crucial role in determining its worth. Real estate possesses distinctive characteristics compared to other asset classes, including its heterogeneity and fixed location, high unit value, management intensiveness, high transaction costs, depreciation, sensitivity to the credit market, illiquidity, and challenges in determining its value and price.
A wide range of real estate properties is available for investment, with the primary commercial categories being office, industrial and warehouse, retail, and multi-family. Each property type exhibits varying susceptibilities to risk factors, such as business conditions, lead time for development, supply and demand dynamics, capital availability and cost, unexpected inflation, demographics, liquidity, environmental considerations, information availability, management expertise, and leverage.
The value of each property type is influenced by factors such as its location, lease structures, and economic indicators like economic growth, population growth, employment trends, and consumer spending patterns.
Real estate appraisers utilize three primary valuation approaches: income, cost, and sales comparison. The income approach involves methods like direct capitalization and discounted cash flow, which consider net operating income and growth expectations to determine property value.
The cost approach estimates value based on adjusted replacement cost and is employed for properties with limited market comparables. The sales comparison approach determines property value by examining the prices of comparable properties in the current market. When purchasing real estate with debt financing, additional ratios and returns such as the loan-to-value ratio, debt service coverage ratio, and leveraged and unleveraged internal rates of return are considered by debt and equity investors.
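Direct capitalization and the lender ratios above can be sketched numerically (all inputs hypothetical):

```python
noi = 500_000.0            # net operating income
cap_rate = 0.05

value = noi / cap_rate     # direct capitalization: ~10,000,000

loan = 6_500_000.0
annual_debt_service = 420_000.0

ltv = loan / value                 # loan-to-value: 0.65
dscr = noi / annual_debt_service   # debt service coverage ratio: ~1.19
```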
Publicly traded real estate securities encompass various types, including real estate investment trusts (REITs), real estate operating companies (REOCs), and residential and commercial mortgage-backed securities (RMBS and CMBS).
REITs, in particular, offer higher yields and income stability compared to other shares. Their valuation can be approached through net asset value due to active private markets for their real estate assets. REITs provide tax exemptions but have less flexibility in real estate activities and reinvesting operating cash flows. Factors considered when assessing REIT investments include economic trends, retail sales, job creation, population growth, supply and demand dynamics, leasing activity, financial health of tenants, leverage, and management quality.
Adjustments are made to financial statements to obtain accurate income and net worth figures.
Valuation methods such as funds from operations, adjusted funds from operations, dividend discount models, and discounted cash flow models are commonly used.
Private equity funds aim to generate value through various methods, including optimizing financial structures, incentivizing management, and implementing operational improvements.
Unlike publicly traded companies, private equity consolidates ownership and control, which is seen as a fundamental driver of returns for top-performing funds. Valuing potential investments poses challenges due to their private nature, requiring different techniques based on the nature of the investment.
Debt financing availability significantly impacts the scale of private equity activity and influences market valuations. Private equity funds operate as "buy-to-sell" investors, focusing on acquiring, adding value, and strategically exiting investments within the fund's lifespan.
Proper exit planning plays a critical role in realizing value. Continuously valuing the investment portfolio poses challenges as market values are not readily observable, necessitating subjective judgment.
The primary performance metrics for private equity funds are internal rate of return (IRR) and multiples, but comparing returns across funds and asset classes requires careful consideration of factors such as cash flow timing, risk differences, portfolio composition, and vintage-year effects.
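IRR itself is just the discount rate that sets the NPV of the fund's cash flows to zero. A minimal bisection sketch, assuming annual cash-flow spacing and hypothetical amounts:

```python
def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-10):
    """Annual IRR of a cash-flow series (index = year) found by bisection.
    Assumes the NPV changes sign exactly once between lo and hi."""
    def npv(r):
        return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo) * npv(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Hypothetical fund: -100 invested today, distributions of 20, 30, 40, 60
rate = irr([-100, 20, 30, 40, 60])  # roughly 15% per year
```

Cash-flow timing matters here, which is one reason IRRs are hard to compare across funds and vintage years.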
Fundamental analysis relies on industry and company analysis, which involve various approaches to forecasting income and expenses. These approaches include top-down, bottom-up, and hybrid methods. Top-down approaches start at the macroeconomic level, while bottom-up approaches focus on individual companies or business segments. Hybrid approaches combine elements of both.
When it comes to revenue forecasting, analysts can use different techniques. One approach involves forecasting the growth rate of nominal GDP and the industry/company's growth relative to GDP growth. Another method combines forecasts of market growth with predictions of the company's market share in specific markets.
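Both techniques reduce to simple arithmetic; a hedged sketch with hypothetical rates and market figures:

```python
# Top-down: company revenue growth from nominal GDP growth plus a
# company-specific relative premium.
nominal_gdp_growth = 0.045
relative_premium = 0.020    # company expected to outgrow GDP by 2 points
revenue_growth = (1 + nominal_gdp_growth) * (1 + relative_premium) - 1  # ~6.6%

# Alternative: market size times expected market share.
market_size_next_year = 8_000.0   # e.g., USD millions
expected_share = 0.12
revenue_forecast = market_size_next_year * expected_share  # 960.0
```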
Positive correlations between operating margins and sales indicate economies of scale within an industry. Certain items on the balance sheet, such as retained earnings, are directly influenced by the income statement, while accounts receivable, accounts payable, and inventory closely relate to income statement projections. Efficiency ratios are commonly utilized to model working capital accounts.
Return on invested capital (ROIC) is a profitability metric that considers taxes and compares net operating profit to the difference between operating assets and liabilities. Sustained high levels of ROIC often suggest a competitive advantage.
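The ROIC definition above can be sketched directly (hypothetical figures):

```python
def roic(operating_profit, tax_rate, invested_capital):
    """ROIC = NOPAT / invested capital, where NOPAT = operating profit * (1 - tax rate)
    and invested capital here is operating assets minus operating liabilities."""
    nopat = operating_profit * (1 - tax_rate)
    return nopat / invested_capital

company_roic = roic(operating_profit=200.0, tax_rate=0.25, invested_capital=1_000.0)
# NOPAT = 150, invested capital = 1,000 -> ROIC = 15%
```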
Competitive factors affect a company's ability to negotiate input prices with suppliers and adjust product or service prices. Porter's five forces framework helps identify these factors. Inflation or deflation impacts pricing strategies, taking into account industry structure, competitive forces, and consumer demand characteristics.
When a new technological development introduces a product that may impact the demand for existing products, analysts can estimate the effect by combining a unit forecast for the new product with an expected cannibalization factor.
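That estimate is a straightforward product of the unit forecast and the cannibalization factor; a sketch with hypothetical numbers:

```python
# Hypothetical cannibalization estimate: units of the existing product
# expected to be displaced by a newly introduced product.
new_product_units = 400_000
cannibalization_factor = 0.30   # share of new-product sales taken from the old product

lost_existing_units = new_product_units * cannibalization_factor  # ~120,000
net_new_units = new_product_units - lost_existing_units           # ~280,000
```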
The choice of the explicit forecast horizon depends on factors such as the projected holding period, portfolio turnover, industry cyclicality, company-specific considerations, and employer preferences.
Analyst forecasts can be influenced by behavioral biases like overconfidence, the illusion of control, conservatism, representativeness, and confirmation bias. These biases can have an impact on the accuracy of the forecasts.