Risk Assessment at the U.S. Securities & Exchange Commission

by

Craig M. Lewis

Chief Economist and Director, Division of Risk, Strategy, and Financial Innovation
U.S. Securities & Exchange Commission

Risk Minds USA Conference
June 5, 2012

Thank you so much for inviting me here to speak with you. This is the second time I've had a chance to participate in a Risk conference, and I am honored to be invited back. Before I begin my remarks, I must make clear that the views I express today are mine alone and do not necessarily reflect the views of the Commission or of my colleagues on the Commission Staff.

I would like to focus my remarks on several exciting initiatives led by my Division — the Division of Risk, Strategy, and Financial Innovation — that are currently underway at the SEC. These initiatives will inform our broader approach to risk assessment and, I hope, will change how the SEC approaches several of the monitoring programs essential to its mission for years to come.

The Division of Risk, Strategy, and Financial Innovation, or "RSFI", was formed, in part, to integrate rigorous data analytics into the core mission of the SEC. Often referred to as the SEC's "think tank", RSFI consists of highly trained staff from a variety of academic disciplines with a deep knowledge of the financial industry and markets. For example, we currently have over 20 PhD financial economists as well as several dozen other experts, including individuals with decades of relevant industry experience. RSFI staff must be very creative because we are involved in a variety of projects across Divisions and Offices, and I believe that RSFI approaches regulatory issues with a uniquely broad perspective. Moreover, RSFI is closely involved with parties and issues outside of the SEC. For example, we work with outside industry and academic experts to inform the Commission about current market trends and innovations and their potential impacts on financial regulation. And significantly, my staff pursues active research agendas, including contributing to peer-reviewed publications and regularly engaging with industry experts, in order to stay current with research methods and to maintain our expertise in cutting-edge approaches and methodologies.

Let me now turn to the specifics of RSFI's Office of Quantitative Research (OQR), which was created to develop custom analytics intended to inform monitoring programs across the SEC. The best way to illustrate OQR's role at the Commission is with a concrete example. Recently, OQR staff developed a model used by the Division of Enforcement's Asset Management Unit. For that project, OQR, together with the Office of Compliance Inspections and Examinations (OCIE), developed an analytical model that uses performance data to identify hedge fund advisers worthy of further review by either OCIE or the Asset Management Unit. In addition, OQR staff provides ongoing analytical support to, and performs ad hoc research at the request of, OCIE and Enforcement. This initiative has been extraordinarily successful. For example, it has resulted in four recent cases (Michael Balboa, ThinkStrategy, Patrick Rooney, and LeadDog)[1], and the analytical tools have provided useful insights to OCIE staff during the examination process. I believe that this project successfully demonstrates the value of the coordinated application of analytics across Divisions and Offices and shows how RSFI can invest in similar analytical projects in the future.
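
To give a flavor of what a performance-based screen of this kind might look like, here is a minimal sketch in Python. It is illustrative only: the metrics shown (serial correlation of returns as a proxy for suspiciously smooth performance, and average outperformance of a benchmark) and the thresholds are my own assumptions for exposition, not the model that OQR actually built.

    import numpy as np

    def performance_screen(returns, benchmark,
                           corr_threshold=0.5, alpha_threshold=0.02):
        """Flag a monthly return series that looks unusually smooth or that
        beats its benchmark by an implausible margin. The metrics and
        thresholds here are hypothetical, chosen only to illustrate the idea."""
        returns = np.asarray(returns, dtype=float)
        benchmark = np.asarray(benchmark, dtype=float)
        # First-order serial correlation: persistently high values can
        # indicate smoothed or stale marks.
        serial_corr = float(np.corrcoef(returns[:-1], returns[1:])[0, 1])
        # Average monthly return in excess of the benchmark.
        mean_alpha = float(np.mean(returns - benchmark))
        flagged = serial_corr > corr_threshold or mean_alpha > alpha_threshold
        return {"serial_correlation": serial_corr,
                "mean_alpha": mean_alpha,
                "flag_for_review": flagged}

A flag from a screen like this would mean only that an adviser merits a closer look, mirroring the way the actual model feeds OCIE and Enforcement review rather than drawing conclusions on its own.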

Let's step back and consider a general regulatory monitoring program. Registrants are responsible for running a business subject to specific requirements. Regulations define the parameters of that space and the rules for operating within it. At the same time, the regulator is responsible for monitoring the industry and the businesses within it. In the case of investment advisers, for example, there are fiduciary duties to uphold and client assets to manage within the limitations described by the Investment Advisers Act. And the regulator responsible for verifying compliance must decide which advisers to examine, how often, and, for each examination, which areas to focus on. Since the investment advisory business varies so much across the registrant population, examinations could vary substantially from one adviser to another or across regions.

The objectives that a robust examination program must address are threefold: (1) develop an examination model that is sensitive to registrants' business models, (2) efficiently inform examination staff of risky areas of interest within the exam, and (3) develop an ongoing process for learning from the exam. Skipping ahead to the third requirement, the process should incorporate a feedback loop that is flexible enough to accommodate new information as exam staff learn about market trends and gather exam-related information, but consistent enough for examiners to compare and learn from specific examination results across regions and over time. I think feedback is a key feature of a quantitative approach to monitoring and examining registrants. It must reflect the inherent tradeoffs between updating the model as shortcomings are identified and the need to maintain a consistent analytical framework that avoids the need for examiners to "retool" each time there is a system modification.

Returning to the first two objectives of a robust process, creating an examination model that is both sensitive to registrants' business models and informative to exam staff about high-risk areas that warrant further examination, I believe that any analytical approach should be tailored to address two primary questions. First, how does a particular firm compare to other firms in the same business? And second, using the information we have on the full space of registrants, can we identify registrants that can be considered outliers relative to other registrants in the same line of business? Answering the first question informs any future examination or investigation of the registrant. For example, if the registrant is suddenly mentioned in the press, the metrics could lend perspective to any media discussion. Using the analysis to generate a ranked list of outlier registrants would answer the second question.

If a registrant has observable characteristics that are measurable using data available to regulators, a process can be established for evaluating that registrant (and all similar registrants) within a consistent framework. Analytics can then help evaluate the registrant in the most efficient and targeted way. For example, analytics give examiners and investigators a baseline from which to establish the underlying fact pattern that caused the registrant to be identified as high risk. One can think of an analytically informed monitoring program as a process that ends with a ranking system that compares the analytical characteristics of a registrant relative to similar registrants. With this information readily in hand, inspection and examination teams can then make subsequent determinations that best reflect their program's priorities.
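
As a rough sketch of such a ranking system, assuming each registrant is summarized by a handful of numeric risk metrics and assigned to a peer group, one could standardize each metric within its peer group and rank registrants by how far they sit from their peers. The column names and the use of the largest absolute z-score as a composite are assumptions for illustration, not the staff's actual methodology.

    import pandas as pd

    def rank_outliers(df, metric_cols, group_col="peer_group"):
        """Standardize each metric within its peer group, then rank registrants
        by their largest absolute z-score. All column names are hypothetical."""
        z = df.groupby(group_col)[metric_cols].transform(
            lambda s: (s - s.mean()) / s.std(ddof=0))
        ranked = df.copy()
        ranked["outlier_score"] = z.abs().max(axis=1)
        # Registrants at the top of this list sit furthest from their peers.
        return ranked.sort_values("outlier_score", ascending=False)

A list like this addresses both questions at once: the within-group z-scores say how a firm compares to its peers, and the sort order surfaces the outliers.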

Any discussion of setting up an analytical delivery process must also consider how errors can be efficiently managed. At the highest level of generality, analytical models are partial descriptions of complex phenomena, and our general approach relies on such models to characterize registrants within the appropriate peer group or registrant space. If this approach is used to generate a schedule of regular examinations or investigations, for example, the staff must allocate resources to manage errors. "False positives" result when the analysis identifies registrants as high risk when their true risk is low; "false negatives" result when the analysis identifies registrants as low risk when their true risk is high. One of the challenges of adopting an analytical approach is evaluating the pool of registrants when a portion of that pool may consist of false positives. Simultaneously, a portion of the registrant population that was not identified as high risk consists of false negatives, i.e., registrants who actually should be marked as potentially high risk. While methods have been developed to mitigate these types of error, they cannot be eliminated.
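
A quick back-of-the-envelope calculation, using entirely hypothetical numbers, shows why both error types demand attention. Suppose 5% of registrants are truly high risk, and suppose a model correctly flags 80% of those while correctly clearing 90% of the low-risk population; neither figure describes any actual SEC model.

    # Hypothetical illustration: even a reasonably accurate model produces
    # a flagged pool dominated by false positives when high risk is rare.
    base_rate = 0.05     # assumed share of registrants that are truly high risk
    sensitivity = 0.80   # assumed share of high-risk registrants the model flags
    specificity = 0.90   # assumed share of low-risk registrants the model clears

    true_positives = base_rate * sensitivity                # 4.0% of all registrants
    false_positives = (1 - base_rate) * (1 - specificity)   # 9.5% of all registrants
    precision = true_positives / (true_positives + false_positives)
    missed = base_rate * (1 - sensitivity)                  # false negatives

    print(f"Truly high-risk share of the flagged pool: {precision:.1%}")   # about 29.6%
    print(f"Registrants missed entirely (false negatives): {missed:.1%}")  # 1.0%

Under these assumed numbers, roughly seven in ten flagged registrants would be false positives, and 1% of the population would be missed entirely, which is why examination programs need explicit procedures for handling both error types.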

The risks associated with false negatives and false positives are not the same. It should go without saying that failing to detect a high-risk registrant can have more serious consequences than erroneously identifying a low-risk registrant as warranting further review. There are a couple of ways to reduce the possibility that the models produce false negatives. One alternative is, of course, to increase staff to undertake more examinations. But another alternative is to build a better mousetrap by expanding the set of risk metrics one considers. Models are not omniscient, of course. Indeed, the possibility always exists for a registrant to conceal nefarious behavior or even to attempt to mimic the statistical properties the regulator verifies. But as underperforming registrants struggle to make themselves attractive to investors, it becomes increasingly difficult to avoid detection by empirical methods. The value of an analytical approach is not that it eliminates the possibility of nefarious activity, but that a consistent approach can make it more difficult for such activity to persist without any indication.
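
The tradeoff can also be made visible directly. Continuing with the same hypothetical scoring setup as in the earlier sketches, the fragment below sweeps the flagging threshold and counts both error types at each level; as the threshold falls, false negatives shrink while false positives, and hence the examination workload, grow.

    import numpy as np

    def error_tradeoff(scores, is_high_risk, thresholds):
        """For each flagging threshold, count false negatives (high-risk
        registrants left unflagged) and false positives (low-risk registrants
        flagged). Scores and true labels are hypothetical inputs."""
        scores = np.asarray(scores, dtype=float)
        is_high_risk = np.asarray(is_high_risk, dtype=bool)
        rows = []
        for t in thresholds:
            flagged = scores >= t
            false_negatives = int(np.sum(is_high_risk & ~flagged))
            false_positives = int(np.sum(~is_high_risk & flagged))
            rows.append((t, false_negatives, false_positives))
        return rows

Expanding the set of risk metrics, the "better mousetrap" approach, works differently: rather than moving along this tradeoff curve, it aims to shift the whole curve down by giving the model more dimensions on which an outlier can reveal itself.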

Part of the support RSFI provides to other Offices and Divisions includes transparently discussing model limitations and making recommendations for alterations to those models. RSFI also assists in managing error rates, a process that will only improve as staff becomes increasingly familiar with both the analytics and the registrant space. RSFI also actively encourages Offices and Divisions that rely on analytics to develop internal processes within their programs to define procedures for how to use analytics. These processes should reflect the priorities of the Office or Division and should include mechanisms for monitoring the effectiveness of both the analytics and the decisions based on the analytics. Indeed, as much as I think that incorporating these analytics will necessarily improve the SEC's risk assessments, I believe that models should never replace the exercise of judgment by the SEC's expert staff but rather be a vital tool to assist in their decisions.

While I've focused much of this talk on examples of a few of the SEC's ongoing data-driven risk assessment projects, RSFI also engages in broad-ranging qualitative risk assessment projects. (Though, I'll note that the distinction between quantitative and qualitative risk assessment is quite flexible.) RSFI has an Office of Risk Assessment staffed with industry professionals who, among other things, reach out to market participants and other regulators on an ongoing basis. This Office works with others at the SEC to integrate current industry perspectives and intelligence into projects and initiatives across the Commission. For example, emerging industry issues or the expected industry response to regulatory changes are a source of valuable insights to risk assessment initiatives across the SEC. Recently, staff reached out to other regulators and industry participants to understand their perspectives on the potential implications of European economic distress. Staff in this Office regularly participate in examinations and consult with the rulemaking divisions to ensure policy-makers are informed about industry perspectives. Though we refer to this type of fact-gathering as "qualitative" risk assessment, it is closely related to the analytical development of the quantitative risk assessment tools I described earlier. Different perspectives inform our processes in both spaces: staff engaged primarily in qualitative risk assessments use the quantitative information and expertise of their RSFI colleagues, while the quantitative staff discuss their approach and methods with qualitative staff and others within RSFI. The goal is to increase the broad knowledge base of RSFI and to develop applications informed across that knowledge base.

I am incredibly proud to be part of so many exciting risk assessment initiatives at the SEC. I hope I've been able to give you a brief glimpse into at least a small portion of what the SEC is doing to ensure that we remain a nimble and sophisticated regulator. Thank you again for inviting me to speak with you today and I'm happy to take questions.

[1] http://www.sec.gov/news/press/2011/2011-252.htm