Showing posts with label 33-9002. Show all posts

01 August 2013

Why Worry?


In 1985, Dire Straits released their iconic album Brothers in Arms. While there are so many great tracks on that album, I keep finding myself singing "Why worry? There should be sunshine after rain, these things have always been the same, so why worry now".

So, sunshine after rain.

What on earth does this have to do with XBRL? Second-quarter filings are flowing into the SEC, including the associated XBRL versions of the filings. And what is notable about these filings? They represent at least the ninth XBRL filing for all SEC filing companies other than those spared by the lack of an IFRS taxonomy acceptable to the SEC (although that really is a different, sadly disturbing, topic). That means that all filers have now left their period of "litigation relief".

The SEC was, of course and as always, very clear - there is no mandated requirement for XBRL to be audited. After all, Chairman Cox was famously worried that requiring an audit of XBRL would spell "crib death" for the XBRL project. How right he was, yet how wrong at the same time.

After all, who would be supportive of the additional burden of having to pay for production of XBRL, and then have to pay for an audit of the XBRL?

This left the SEC with a small problem when it came time for the Final Rule. How to require a new reporting format from filers with one of the stated purposes being to improve the quality of information provided to the investing community - while at the same time keeping to the promise to not require an audit of the XBRL?

The answer: give filers two years to learn how to get their XBRL right, then make the XBRL carry the same legal liability as the HTML filing, but without a requirement for an audit.

Here is the problem: the two-year window did not allow enough time to develop a deep enough reservoir of resources that actually understand XBRL, or enough skilled individuals within companies to produce XBRL that is "the same" as the HTML. Certainly not to any meaningful level of confidence.

Now CFOs should be asking themselves "Why worry?" Our providers know XBRL, don't they, even if we don't? Right? And the SEC isn't really serious about this stuff, is it?

Too many filers simply do not have the resources to maintain dedicated XBRL expertise in-house. And the total XBRL resources required to support production of XBRL filings have grown by over 100 times in five years: for every 1 "XBRL resource equivalent unit" needed in 2009, we need almost 100 today. (I'm inventing a new unit of measure here - the XBRL resource equivalent unit (the XREU, but I promise to avoid using that). The XREU is a unit of measure equal to the person resources, trained in XBRL, required to produce the equivalent of a first-year "block tagged" XBRL filing. So I'm assuming for simplicity that each filer required 1 unit for their first year of filing. I am also assuming, from personal experience and wide discussion, that "detailed footnote tagging" (DFT) represents three times the first-year effort, and therefore equals 3 XREUs. Finally, I am assuming that assurance over the XBRL requires 1 XREU per filer. I know, a very blunt instrument of an estimate, but let's go with it for a moment.)

So in 2009, we needed 500 XBRL resource equivalent units. With detailed tagging requiring three times the effort, in the second year production of XBRL required the original 500 units (for the first filers) plus 1,500 units for their detailed tagging requirement, plus another 1,000 units for the second wave of filers.

For simplicity the following chart shows the growth in required XBRL resource equivalent units.


Oh, and just in case, I've also added the equivalent of 1 XBRL resource equivalent unit per filer for assurance starting in each filer's third year - the year they leave their litigation relief.
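
To make the arithmetic behind the chart reproducible, here is a minimal Python sketch of the model. The wave sizes (roughly 500, 1,000 and 8,000 filers starting in 2009, 2010 and 2011) and the 1/3/1 unit weights are my reading of the post's assumptions, not official SEC figures:

```python
# Rough model of annual "XBRL resource equivalent unit" (XREU) demand.
# Assumptions (approximated from the post): 1 XREU per filer for block
# tagging, +3 XREUs from year two for detailed footnote tagging, and
# +1 XREU from year three for assurance, once litigation relief ends.

WAVES = {2009: 500, 2010: 1000, 2011: 8000}  # phase-in year -> number of filers

def xreus_for_filer(start_year: int, year: int) -> int:
    """XREUs a single filer needs in a given calendar year."""
    age = year - start_year + 1      # 1 = first filing year
    if age < 1:
        return 0
    units = 1                        # block tagging of face financials
    if age >= 2:
        units += 3                   # detailed footnote tagging (DFT)
    if age >= 3:
        units += 1                   # assurance, post litigation relief
    return units

def total_xreus(year: int) -> int:
    return sum(n * xreus_for_filer(start, year) for start, n in WAVES.items())

for y in range(2009, 2014):
    print(y, total_xreus(y))
```

Under these approximations the model reproduces the 500 units for 2009 and the 3,000 for 2010 described above, and reaches the high 40,000s by 2013, consistent with the order of magnitude of the growth described in this post once the rounded wave sizes are taken into account.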

So why worry? Because a close look at the chart above shows a growth from 500 XBRL resource equivalent units in 2009 (basically, one filer's basic tagging of face financials equals 1 unit in this analysis) to over 50,000 XBRL resource equivalent units this year, 2013.

Most CFOs, even if they have not seen the chart above (or one like it), instinctively know that they are exposed. There simply has not been the time to train and retain the skill sets and individuals required to actually produce over 50,000 XBRL resource equivalent units. The additional small problem is that this pool of resources is needed to meet the needs of US SEC registrants alone. There are many other XBRL implementations around the world, including India's, that are competing for the few existing and established resources.

And knowing that the resources simply are not there, it is only prudent of those CFOs to worry about their XBRL, and the liability that they now carry.

Why worry? There should be sunshine after rain. But for now, it is still raining brand new, freshly minted XBRL resources.

Put bluntly - the CFO that does not worry about their XBRL is a worry.

We are a year or two away from sunshine.


03 June 2011

SEC's askOID - 2 Thumbs Up

When the SEC established the Office of Interactive Data (their way of creating a completely misleading name for XBRL), they also put together a team of people to answer questions about XBRL issues. In the time since, the askOID@sec.gov e-mail address has received (I had hoped to put a range in here, but they told me they do not provide that information, so I'd guess) hundreds, maybe thousands, of e-mails from a wide range of individuals and companies from around the world.

I have, on more than one occasion, when baffled or simply wanting to ensure I have exactly the right answer, sent e-mails to askOID@sec.gov. The responsiveness has always impressed me. In addition to a very rapid response, the information has always been precise - sometimes too much so for me - and when I have had follow-up questions, in almost all cases either the answer arrived (again, quickly) or the offer was made to call me and walk me through the question and answer.

Sometimes the answer has simply been a pointer to a paragraph in the rule, other times to other material on the SEC's website. On the occasions when we spoke, it was invariably to clarify my poorly constructed question(s), and always resulted in the clarification that I needed.

With an additional 8000+ filers coming on stream this summer, I expect the e-mail stream is already heating up. Hopefully they will be able to maintain the same level of support that they have to date.

The SEC has also recently established an online form for questions, located at: http://www.sec.gov/cgi-bin/contact_risk_fin/. The online form does provide a structured environment for entering questions, which should help OID process the requests for information. They also provide a phone number and new e-mail address for technical questions. "For technical support or questions on the Interactive Data Chapter 6 of the EDGAR Filer Manual, please call (202) 551-8900 option 3 during normal operation hours (9 AM to 5:30 PM EST) or email webtech@SEC.gov."

Of course, there remains at least one "oxygen thief" at the SEC and dare I say it, within their Interactive Data program, but the folks that answer the askOID@sec.gov are certainly not in that category.

So two thumbs up to the SEC Office of Interactive Data and the people receiving and responding to the askOID@sec.gov e-mails.

23 May 2011

Assurance over XBRL: the potential cost for American business

Subtitled: A cautionary tale

Since the early days of 1999, the accounting profession has invested hugely in XBRL, and as we all know, accounting firms are a public good, so this investment has had the purest goals of enabling greater transparency and internal company process improvements. And in that spirit, I have no doubt that they will be doing everything in their power to reduce the marginal incremental cost of assurance over XBRL to a net increase of $0 over current audit costs.

It is merely a byproduct of that altruistic concern that will potentially (again, subject to some very questionable assumptions - see below) deliver anywhere up to an annual $100 million in additional revenue to each of the Big-4 (although possibly "only" up to $30 million from the first wave of filers).

The cautionary message is this: unless the assurance profession finds a way to ensure that the marginal incremental cost of assurance over XBRL is $0, XBRL will be seen as simply another cost burden being placed on American business. Continuing to pile cost onto the producers of the XBRL while delivering the benefits to the users will increase resistance and endanger businesses' acceptance of this new filing requirement.

Let's be absolutely clear - a cost-benefit analysis requires that both the costs and the benefits be defined and measured, and then balanced to confirm a net benefit (and thus support for the proposition) or a net cost (which will only increase resistance). If we do a societal cost-benefit analysis, then we can balance individual company costs against societal gains, but it still needs to be quantified.

Background

The SEC, in the XBRL mandate, provided litigation relief for the first two years of production and provision of XBRL with 10-Q and 10-K filings. At the end of that two-year window, filers will no longer have that protection. Does this mean that there is a requirement for the XBRL to be audited? No. Does this mean that companies may be held liable for investment decisions or adverse changes in share values due to erroneous information making its way to the investing community via the XBRL? The answer to that is "the jury is out". Well, actually, since no case has been taken yet, there is still no jury, so maybe that's not the right answer.

In my most recent post, I provided a set of assumptions, and from those assumptions developed a range of potential cost to American businesses to provide XBRL to the SEC.  I did not include the cost of audit in that calculation. I'm attempting to do so here.

Assumptions

A reminder: as in my previous post, what follows are assumptions, and readers should accept or change them to fit their own views and expectations. I welcome a different set of assumptions and a different set of expected costs.

  1. Virtually all of the top 1500 largest filers are audited by the Big-4, and those 1500 are 'evenly' distributed across the 4. (I know they are not, but for the sake of basic assumptions...)
  2. Today the only assurance that can be provided over XBRL is through Agreed Upon Procedures - not an audit and not external assurance.
  3. The cost of assurance over block-tagged XBRL ranges from $25,000 - $50,000, a range that reliable sources have told me is 'conservative'.
  4. Assurance professionals, as part of their assurance process, need to look at each extension and confirm that there was not an equally valid existing taxonomy element that could have been used.
  5. Extension elements in first-year "block tagged" XBRL are running in the 10s, not the 100s.
  6. The second year of XBRL production brings in the requirement for detailed footnote tagging, which increases the number of reported elements by up to a factor of 10.
  7. Extensions in detail-tagged XBRL are running in the 100s. In one (outlier) case I counted over 700; in another (randomly selected) the number of extensions went from around 10 to over 200.
  8. External assurance, for the same subject matter, generally requires more work and quality assurance than internal AUP work.
  9. Auditor risk will be significantly higher for external assurance over XBRL than for AUP assurance.
  10. Auditors will have identified process improvements that will reduce the "cost per element", so to speak, of assurance.
  11. The total effort to audit will be at least 10x the first-year effort (when assurance was voluntary), due to more complex XBRL and a massive increase in extensions.
  12. Total effort (big assumption, please feel free to modify) will therefore be 8x the AUP (10x the XBRL, more QA, improved processes).
  13. In all cases, I'm rounding down for additional conservatism.

So, with that behind me: what costs $25,000 today will cost $200,000 when assurance is required - or should I say, when filers and auditors no longer have litigation relief. And by the same logic, the higher end could easily reach $400,000. But if we settle on an average of $300,000 to provide external assurance over the XBRL, we then have something to extrapolate.

So let's take an average of 360 year-1 and year-2 filers per Big-4 firm, at $300,000 each. Pretty soon we have over $100 million in additional audit revenue per firm in 2013. The firms themselves are gathering the information that will validate (or otherwise) these estimates right now, as the first wave of filers leaves the safety of the SEC's two-year umbrella.
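
The extrapolation above is only a couple of multiplications; a few lines of Python make the assumptions explicit. The AUP cost range, the 8x multiplier (assumption 12) and the 360 filers per firm are all the rough figures from this post, not audited numbers:

```python
# Back-of-envelope audit-revenue extrapolation using the post's assumptions.
aup_low, aup_high = 25_000, 50_000    # current AUP cost range per filer ($)
multiplier = 8                        # assumption 12: 10x XBRL, more QA, some process gains
ext_low, ext_high = aup_low * multiplier, aup_high * multiplier
avg_cost = (ext_low + ext_high) // 2  # settle on the midpoint
filers_per_firm = 360                 # assumed year-1 + year-2 filers per Big-4 firm
revenue_per_firm = avg_cost * filers_per_firm

print(f"External assurance: ${ext_low:,} - ${ext_high:,} per filer")
print(f"Midpoint ${avg_cost:,} x {filers_per_firm} filers = ${revenue_per_firm:,} per firm")
```

The midpoint works out to $300,000 per filer and roughly $108 million per firm, which is where the "over $100 million" figure comes from.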

So some counter arguments

These are pretty wild guesses. What are some of the arguments against these figures? Again, I'll simply list these as if they were assumptions.

  1. If XBRL really is built into the consolidation process, then there will be less need to actually audit the XBRL. The good news is that this may well be the case, and as such, consolidation software companies could well be selling systems implementations or upgrades purely on the cost saving of reducing the (coming) audit costs. "Spend more money with us or you will spend even more money with them".
  2. The SEC might extend the litigation relief period - but I wouldn't plan on that. In fact, while the CCR did ask for that, with the re-nomination of Commissioner Aguilar, I'm pretty confident they won't get an extension.
  3. My numbers could be completely wrong. Well, as with all assumptions, that's a given. The issue is not whether they are right or wrong; the question is "what are your estimates of the cost based on your assumptions?"
  4. The assurance profession will identify mechanisms by which the cost of assurance over XBRL can be slashed.

What is happening?

For almost six years, various working groups of XBRL International and the AICPA have focused on defining how assurance can be provided over XBRL. Some progress has been made; some sticking points remain. In April 2009 the AICPA released Statement of Position 09-1, "Performing Agreed-Upon Procedures Engagements That Address the Completeness, Accuracy, or Consistency of XBRL-Tagged Data". To date that remains the primary guidance for assurance over XBRL.

The IAASB continues to put XBRL in the "later" basket, although I'm hearing that increased interest is being shown. The PCAOB, the official auditing standards setter these days, released a set of questions and answers on XBRL in 2005, and to the best of my knowledge has not updated it since.

Credit should be given to the AICPA, which has been taking the lead, and with the major firms all involved with their task force on assurance over XBRL. Amy Pawlicki at the AICPA has taken on the difficult task of chairing the XBRL International Assurance Working Group, and has not been shy about pushing participants to advance the goals of the Working Group.

I also have no doubt that the major auditing firms have developed, or are developing, software that will automate the vast majority of the checks that auditors are required to perform. Robust automation can radically reduce the overall workload required to assure XBRL. The residual concern is that the firms might not see any benefit in reducing the cost.

In summary, much is happening, and hopefully guidance will be made available that reduces the marginal cost of assurance over XBRL to $0 in incremental cost to filers.

A disappointment

Which leads me to a little disappointment. After my previous post on putting a cost on XBRL, instead of anyone providing an alternative range of costs, the majority of (public) responses were to say that my assumptions were wrong and that I did not factor in the benefits. Or as one person put it: "An analysis that considers the costs without the benefits does not seem like it is very balanced." My response was, and remains, "An analysis that considers the benefits without the costs does not seem like it is very balanced." So while advocates of XBRL continue to say just how wonderful XBRL will be (especially if you are a very big company with a lot of money to spend), none of them seem willing to admit that there is a cost to implementation of XBRL.

Recommendations

1. The audit profession must demonstrate how it is driving down the cost of providing assurance over XBRL.

2. I have a difficult time quantifying the benefits of assurance over XBRL, but I am confident that some in the assurance community, and certainly those who have been advocates for XBRL for a decade, should by now have quantified the benefits. Please share that information. But it should be quantified.

3. Those who disagree with my analysis should provide a counter analysis that documents their assumptions and the resulting calculated costs to balance against the benefits that they quantified in number 2 above.

4. If you are a filer, ask your auditor how much assurance over XBRL will cost. Demand at least a range or an estimate (if you are a 2nd or 3rd year filer).

5. Send me any information you can on what it is costing you or what you have been quoted - I will not post or in any other way allow identification of you, your company or your auditor.

19 May 2011

Time heals? Assurance avoids

Broc Romanek, editor at The Corporate Counsel, has just posted an extract from the WSJ article announcing the White House's plan to nominate two commissioners to the SEC. The first name should be well known to the XBRL community: Commissioner Luis Aguilar was the only commissioner to vote against the XBRL final rule.

We should remember why.

Commissioner Aguilar supported the goals and objectives of the SEC's Interactive Data (XBRL) program, and was supportive of the proposal put forward, except for one aspect - the lack of an assurance requirement. Of course, with a 4:1 split on the vote, he could safely vote against the Rule without endangering its passage by the Commission.

But the rationale for his vote should be considered by anyone who thinks that the SEC might provide an extension to the two year liability relief provision of the Rule.  In December 2008 I wrote (and quoted from Commissioner Aguilar's comments):

Only Commissioner Aguilar had the courage to vote against the rule, declaring that this was the first time in history he has seen the SEC weaken protections for investors:
I am not prepared to reduce the level of protection that I believe investors are entitled to. Using new technology to improve disclosure is a good thing — but not when it dilutes investor protection. In these times of market turmoil, investors need to know the SEC is looking out for them.
Let me quickly say that I have always been, and remain, deeply convinced that XBRL can and will revolutionize business reporting, both internal and external, and that XBRL has the ability to deliver incredible efficiencies across the business reporting supply chain. And let me add that, in the long run, the SEC’s action last Wednesday represents a major step forward toward the full implementation of XBRL for financial reporting in the United States.

The good news is that two years have passed, and the first wave of filers (the group-1 or phase-1 or year-1 or whatever) are now leaving the protective covering of that litigation relief, and will now be liable for the content of their XBRL.

Let's look again at Commissioner Aguilar's final comment:

"It departs from our best traditions, and shackles investors with the risks and costs arising from errors and misstatements in interactive data, even though issuers control the process of preparing the disclosure and are in the best position to ensure its accuracy and reliability."

Returning Commissioner Aguilar to the SEC will be good for the SEC, good for XBRL, and I am very pleased to see his re-nomination.

Equally, returning Commissioner Aguilar should serve as fair warning that it will be very difficult to get an extension to the liability relief through the Commission, and no filer should expect to see such a deferment.

Time, they say, heals all - and indeed the XBRL world has had two years to figure out how to provide assurance over XBRL. But at what cost? Stay tuned...

09 May 2011

XBRL: The future's so bright, we should put away the Kool-Aid

Timbuk3 sang a song called "The Future's So Bright, I Gotta Wear Shades". There really is no better way to describe the potential future of XBRL. I say potential because, like all futures, we will be part of making that future. But making that bright future first requires us to be honest about the present and the potential. XBRL will not bring world peace, and is not the best solution for many of the problem statements that XBRL advocates say it will solve. The market has had a decade to explore and compare XBRL against other solutions to these problems, yet for some reason has not used XBRL to achieve those benefits.


It is time to acknowledge that there are problems for which XBRL simply is not the solution. Maybe it is time for us to recognize that there actually are better, cheaper, faster solutions for some problems.


That is not to say that there are not problems for which XBRL is the best solution, but more on that later.


So think of this article as me standing up in front of the world and saying “I am an XBRL-holic. I’ve drunk the Kool-Aid for too long. I know what it tastes like, but I’ve also come to know its limits”.


Forget the Myths


I have had on-again, off-again conversations over the past few years with ERP vendors and advocates. My question to them usually goes along the lines of "There's this really interesting article on the potential of XBRL, can you take a look please and let me know what you think?" Thankfully I've yet to have any of these people come back and say "XB-what?"


All acknowledge the advantage of a vendor independent, open standard for information interchange. They understand and support the opportunities for semantic interoperability that XBRL enables, again from a vendor independence perspective.


But every one of them has also been scathing about the claims for XBRL to transform internal reporting and internal processes. Each of them has labeled the identified problems as symptoms of poorly implemented ERP systems or ineffective processes. Each told me that business process re-engineering, while no longer the consulting product or phrase du jour, is still the best way to gain the internal benefits promised by XBRL, and at a far lower price than the custom development exercises that current XBRL implementations require.


When I talk with them about the ability to use XBRL for provision of information that has accuracy built in, there is grudging acceptance, but only when that information is being provided to or coming from an external party. Normalization of reported elements in an external financial reporting environment also gains some support. The most support comes from the idea of an open standard that provides a boundary standard for provision of information between parties where the structure and content is not mandated by a form.


But while they have been almost unanimous in their view that XBRL is a valid and potentially effective standard for such data exchange, not one said that it is or would be effective for data analysis. All said that the data would first need to be converted into another format - vanilla XML, Excel, CSV, SDR or other format - to allow for faster processing, storage and usability by humans or computers. Even the FDIC, which uses XBRL to collect Call Report data from banks in the United States, provides the output in three formats. I do not know of any entities that take the XBRL feed from the FDIC, but I am sure there are some.


I have heard of at least one XBRL project in which XBRL information was sold as a data-quality solution that could be used within a company for client and prospect analysis. My understanding is that the users basically said “Thanks, but we really don’t want to train our people in a whole new standard – can you give us that in Excel?”


We also hear a lot about XBRL for internal audit. Apparently XBRL will revolutionize consumption and analysis of data from GL or accounting systems or other data sources. This is a great dream, especially to me, a former internal auditor. Yet it is also a solution to the day before yesterday's problem. Yesterday (well, for the last 15 years or so) data analysis products have been built for internal auditors (and external auditors) that will accept data from almost any proprietary GL or accounting system. These systems can import the data and run a couple of decades' worth of pre-packaged algorithms, formulas and calculations on that imported data. (Do you want to run a Benford's Law calculation across 100,000+ records? Well, just push a button and it's done.)
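
For readers unfamiliar with the push-button test mentioned above, a Benford's Law check really is a one-screen exercise. This Python sketch is my own illustration, not any vendor's implementation: it compares observed leading-digit frequencies against the expected log10(1 + 1/d) distribution.

```python
# Minimal Benford's Law first-digit check, of the kind audit-analytics
# packages run at the push of a button.
import math
from collections import Counter

def benford_expected(d: int) -> float:
    """Expected frequency of leading digit d (1-9) under Benford's Law."""
    return math.log10(1 + 1 / d)

def first_digit_freqs(values):
    """Observed leading-digit frequencies for a list of nonzero numbers."""
    digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v]
    counts = Counter(digits)
    return {d: counts.get(d, 0) / len(digits) for d in range(1, 10)}

def suspicious_digits(values, tolerance=0.05):
    """Digits whose observed frequency deviates noticeably from expectation."""
    freqs = first_digit_freqs(values)
    return [d for d in range(1, 10)
            if abs(freqs[d] - benford_expected(d)) > tolerance]
```

A real engagement would use a proper statistical test (chi-square or similar) rather than a flat tolerance, but the point stands: this capability has been commodity software for years.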


Some of these products have even built in the capability to import XBRL, but I could not say how many users actually use that capability. In one case, the application can import XBRL GL, yet as there have been zero users, the company stopped supporting the standard for data consumption "years ago". XBRL GL might have had a future a decade ago, but I now believe that XBRL GL serves only two purposes: 1) to provide a bookend to the Business Reporting Supply Chain slide that is almost obligatory when XBRL presentations are made - XBRL GL "proves" that XBRL has a place, independent of existing ERP or accounting systems, virtually from the point of transaction entry/creation; and 2) to serve as a test bed for development of taxonomies for business operations. After a decade, how many 'real' XBRL GL implementations are there in the world?


So, now that I have that all off my chest, we can move on to the benefits of XBRL, and my predictions, and what is the glorious future of XBRL – “so bright I have to wear shades”?


Chickens and Eggs


The XBRL chicken-and-egg problem has always been this: a lack of software, driven by a lack of data to use the software; too little data because there is not enough software that can cost-effectively create XBRL data; not enough data to demonstrate the analytic power of semantically consistent data; and therefore no business case for building the software to exploit non-existent data. And around and around we go.


The Chicken and Egg problem is coming to an end. While there have been a number of very effective XBRL implementations around the world, in most cases the data collected in XBRL is not actually made available to the consuming public. The SEC’s program is creating that giant pool of data. The entire lack of data problem (the Egg, or the Chicken?) is in the process of disappearing. Therefore we can expect two great leaps forward over the next couple of years. Note I said a couple of years. This is not today, and certainly not yesterday. This is in the future…


First, as millions of data-points of accurate information are made freely available, developers are beginning to understand the potential uses of that data, and the potential applications that can add value to that data. This is already happening, and we are seeing the first products coming to the mass market. This will accelerate.


Second, as production of XBRL becomes an assumed requirement for accounting and financial reporting systems, we will see XBRL being built into almost all systems. With XBRL being built in as an output format, it is only a small step (okay, not really) to allowing the linking of any input data to a corresponding element within a company specific, industry, national or international taxonomy. Even better, it should become possible to link any input data to corresponding elements in multiple taxonomies.
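
As a thought experiment, the "link input data once, report to many taxonomies" idea could look something like the mapping sketched below. All account and element names here are hypothetical illustrations I made up for this sketch, not real taxonomy identifiers:

```python
# Hypothetical sketch: one source fact mapped to element names in several
# taxonomies, so a single tagged input can feed multiple reports.
FACT_MAP = {
    "gl:cash_and_equivalents": {          # hypothetical ledger account
        "us-gaap-like": "CashAndCashEquivalents",
        "ifrs-like": "CashAndCashEquivalents",
        "internal": "CashTotal",
    },
}

def tag_fact(account: str, value: float, taxonomy: str) -> dict:
    """Resolve a source account to the element name in a chosen taxonomy."""
    element = FACT_MAP[account][taxonomy]
    return {"taxonomy": taxonomy, "element": element, "value": value}

fact = tag_fact("gl:cash_and_equivalents", 1_250_000.0, "internal")
```

The design point is that the mapping, not the fact, carries the taxonomy knowledge: once the link table exists, emitting the same value against a company-specific, national or international taxonomy is a lookup, not a re-tagging exercise.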


What will this allow?


The market, with enough data-sets and individual XBRL-tagged facts, will (note I’m still using the future tense, though that becomes the present within months, not years) confirm to the software industry that there is enough data, and therefore probably enough demand, to support a business case for new software development.


It will allow (again, future tense) listed companies to not only provide their own financial statements on their websites, but to produce low-cost, high quality analysis of themselves and their peers/competitors – basically self produced market analyst coverage. Companies will also have the ability to provide the type of analysis that will enable a visitor to actually “play” with the data and run comparisons directly from the investor relations screens on the company’s website. Talk about being able to “tell your story, your way”.


More regulators will (some already do – present tense) be able to perform rapid and cost effective analysis of companies, identifying the outliers sooner than the market, such that regulators may intervene before investors suffer massive losses from fraud or business failure. This will enable regulators to more cost effectively achieve their mandates of protecting the markets and investors.


High-quality, free data, provided directly from the regulator, will force the data aggregators to improve the quality and range of services built into their data offerings. Why? Because if the data is free, the value is in the added services, not in selling the data. Anyone will be able to download the data for free. At a guess, the Google and Yahoo Finance online services will be adding additional capability, if they are not already - enabling the side-by-side analysis of multiple companies, at little or no cost to the retail investor.

So in this not-too-distant future we have XBRL being produced as a natural output of most accounting systems, and we have inexpensive tools for consumption and analysis of XBRL. This opens the market for commercial banking systems around the world to request XBRL versions of financial statements, radically increasing the number of companies producing XBRL. And they will be producing XBRL because it will be as easy as producing an Excel, Word or PDF version of their financial statements for their bank. When the marginal additional cost to produce XBRL sinks to $0, there will be no reason not to produce reports in the standard.

Finally of course, XBRL being built into systems will support improved governance, risk management and effective internal controls – where that is not already being achieved through effective process implementation or re-engineering. It will definitely improve the external reporting process, by reducing internal data-friction.

So yes, the future of XBRL is so bright that if we focus on what XBRL can actually achieve, we can put away the Kool-Aid, and put on the shades!

26 April 2011

Looking back, looking forward - the SEC's XBRL program

(This was originally posted in the Institutional Risk Analytics weekly newsletter by Christopher Whalen on 12 April 2011.)

The US Department of Homeland Security now requires all airlines to provide a list of all US bound passengers before the airplane takes off from its originating airport. Why? Because waiting until the plane arrives to screen for potential terrorists or threats is wasteful. The information upon arrival may be accurate and complete, but it is no longer timely. 

Financial reporting to the markets is much the same, with audited annual reports and quarterly reports being provided to the SEC (and through them to the investor community) - in effect, after the plane has landed. By the time the information is provided to the SEC, it may be accurate and complete, but it is no longer timely. The immediate buy-sell-hold recommendations and actions have already taken place at the time of the earnings release, and sometimes before. In fact, it is difficult to find anyone who actually looks at a 10-K in detail.

In 2008 the SEC proposed a rule requiring registrants to provide XBRL (eXtensible Business Reporting Language) versions of their annual and quarterly reports (10-K and 10-Q), and for foreign filers to provide XBRL versions of their 20-F or 40-F filings. In 2009 the final rule (33-9002) was passed ( http://www.sec.gov/rules/final/2009/33-9002.pdf ), creating a three-year phase-in based on the market capitalization and filing status of each registrant. 

This new reporting requirement was sold by the SEC as a step forward for investors, reducing the effort required to consume information (no more parsing of HTML or text documents) and improving the quality of information reported (by removing manual re-keying errors). After all, if information can be consumed at the data element level, with a 'tag' telling the consuming computer what each piece of data is, then the entire process becomes quicker, cheaper and more accurate. 
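To make the idea of element-level tagging concrete, here is a minimal sketch of what a consuming program sees. The element name, context and values are illustrative, not from any actual filing; the point is simply that the tag itself identifies the data, so no document parsing is needed.

```python
# A hypothetical XBRL-style fact: the element name tells a consuming
# program exactly what the number is -- no scraping of HTML required.
# The concept name, context ID and value below are made up for
# illustration; a real instance document is far larger.
import xml.etree.ElementTree as ET

instance = """
<xbrl xmlns:us-gaap="http://fasb.org/us-gaap/2011-01-31">
  <us-gaap:Revenues contextRef="FY2010" unitRef="USD" decimals="-3">12345000</us-gaap:Revenues>
</xbrl>
"""

root = ET.fromstring(instance)
ns = {"us-gaap": "http://fasb.org/us-gaap/2011-01-31"}
fact = root.find("us-gaap:Revenues", ns)

print("concept:", fact.tag)            # namespace-qualified element name
print("value:  ", fact.text.strip())   # the reported figure
print("context:", fact.get("contextRef"))  # which reporting period
```

A consuming application can load thousands of such facts straight into an analytic engine, which is exactly the efficiency argument the SEC made.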

The dream is great; the reality is additional reporting burden and cost for little visible benefit. And this is where the SEC can and should be focusing its efforts - demonstrating and communicating the benefits of XBRL, and pushing for greater adoption earlier in the reporting process. Because "Interactive Data", as the SEC calls XBRL, can deliver real time and cost savings, companies should be looking for ways to exploit the additional reporting power that XBRL provides, and the SEC should be fine-tuning its program. 

A Short History

To find the origins of the SEC's XBRL program we need to go back to the grim days after Enron and Worldcom and the introduction of the "Full Employment in the Accounting, Auditing and Consulting Professions Act", also known as Sarbanes-Oxley (SOX). Numbers - 302 and 404 - became the newest form of torture out of Washington. CEO/CFO certifications of the effectiveness of Internal Controls ensured that no matter the economy, the auditors (and consultants) would be busy, for at least a few years through implementation and the first couple of years of operations. 

But buried in SOX was another number - Section 408 - which requires the SEC to review all filers not less than once every three years, and many filers every year. The highly manual processes at the SEC (all right, "manual" takes on a new meaning when it means copying and pasting data from documents into spreadsheets, but by today's standards that is manual) meant that these review requirements were simply unachievable. Something was needed, and the idea of tagged data, directly consumable by systems to automatically populate analytic engines, looked - and still looks - like just the answer to this problem. 

As an aside, when I asked a senior SEC official what they would say if Congress asked them if they were complying with section 408, he answered "Dan, we would look them in the eyes and say 'Yes, of course we are complying'." Then he smiled. 

Of course, the SEC didn't need SOX 408 to know it had to act. They wanted to find the next Enron or Worldcom before a whistleblower or counterparty discovered it for them, the hard way. The forward-looking leadership began to press for better use of the information the SEC already receives, or, where necessary, changes to the format of the information received.

When SEC Chairman Christopher Cox came into office, with budget constraints and a system that was moving too slowly, he found an existing program in place exploring the concept of "tagged" data. Conrad Hewitt was an early supporter of the concept, and Jeff Naumann had already been brought over from the AICPA to explore the concept and, if possible, provide a set of recommendations on how to move forward. 

At the same time, Jon Wisnieski at the FDIC (in conjunction with the FFIEC and OCC) was developing the new CDR project to upgrade the Call Report process. This project pushed XBRL out into the Call Report production software used, at that time, by 8200 banks across the United States for their quarterly reporting. 

The XBRL component of the CDR project went live in late 2005 and saw immediate benefits in the quality of information reported to the FDIC, while dramatically reducing the FDIC's overhead in analyzing banks. Reporting times dropped and data quality jumped almost overnight, with the number of banks receiving queries from the FDIC each quarter falling from around 35% to 5%. 

The same information, or a subset of that information, was then made available to the investor community through feeds from the FDIC, in one of three data formats: two compact legacy formats as well as the full XBRL document. IRA uses those feeds from the FDIC (but not in the XBRL format) to populate their database and feed their bank analytics and ratings. The key here is that using XBRL to gather, characterize and validate the bank reports enables a multiplicity of data output choices for consumers.

And the SEC could only have been watching the FDIC with envy. 

Mandate

So in the heady days after the successful FDIC implementation, Barry Melancon at the AICPA received a call from Chairman Cox asking for a letter outlining the steps the SEC could and should take to implement an XBRL program. 

The timing could not have been better, as on the day of the call, an internal meeting at the AICPA took place in which one of the discussion items (informal of course) was when and how to wind up the AICPA's direct involvement in XBRL and when to sack the AICPA's Director of XBRL. It does not take much to imagine a possible change in tone, from "how do we reduce this overhead" to "how do we maintain control over the XBRL movement". 

As Chairman of the XBRL US Steering Committee at the time, the change was easy to see. One week the question from the AICPA was "can we spin off XBRL in 6 to 9 months?" Soon that had morphed into "we think it might take a couple of years to spin off XBRL into an independent entity." So a letter was written to Chairman Cox outlining the steps that the SEC could take to position itself to implement XBRL. 

The first and most important step was completion of the US GAAP taxonomy, which at the time was being built by dedicated volunteers, and simply was not ready. As Liv Watson of EDGAR Online said, an "Industrial Strength Taxonomy" was required. 

And so it was. 

In September 2006 Chairman Cox again called Melancon and this time asked how soon an independent XBRL entity could be established to be the contractor to build that industrial strength taxonomy, and could that new entity provide a proposal to the SEC for the development of the taxonomy. 

I've left out a number of steps that took place in between the letter and the call, including a meeting at the SEC in which I was asked how much the taxonomy would cost. My answer then was that I had been told by the Taxonomy Working Group that it would cost $4.5 million. The answer I got was "That's too bad, if it was a hundred million it would be easier to get appropriated than $5 million - that's just the way Washington works." 

Nonetheless, as part of an EDGAR system upgrade program, the SEC budgeted $5.5 million for the XBRL US GAAP taxonomy, with the contract to be fulfilled by the newly created XBRL US Inc. 

With the coming change in administration at the White House, or at least an assumption of a coming change, it was clear that if Chairman Cox was going to get the credit for modernizing the reporting environment, an XBRL proposed rule would be needed, and a final rule voted on by the Commission by late 2008. 

The clock was ticking. 

At the same time, the Pozen Committee, while supporting the introduction of XBRL, recommended a phased-in approach. The Committee's concerns turned out to be spot-on. Was there adequate software available in the market to use pure XBRL documents? Was there an adequate pool of resources that understood XBRL file creation? Most important, would the cost to filers be commensurate with the benefits, and thus acceptable? But the ghost of SOX haunted the program.

In 2008, the proposed rule was issued and subsequently voted on to become the final rule, with its three-year phase-in. In 2009 the first XBRL instance documents began to arrive at the SEC. To give the SEC credit, the estimated cost of implementation per company for the first year (non-detailed tagged data) was up to $80,000, and a review after the first year found the experiences of companies to be very close to that level of cost. The SEC had no idea what the cost of detailed tagging would be. In the two years since the first companies provided XBRL, costs have come down and the software has become much better, but there remains a chronic shortage of skilled XBRL specialists. 

Who benefits?

So now the largest 1500 public companies across America are producing and providing XBRL versions of their financial statements to the SEC. In addition, some companies are using XBRL as the opportunity to improve their internal reporting processes, pushing XBRL farther back into their reporting systems. United Technologies ("UTX") is a good example, having used XBRL as the catalyst to improve their external reporting processes, saving over 800 person hours per quarter (before the detailed tagging requirement, but that is a different issue). 

The other 8700 companies (the number estimated by the SEC in the "final rule") will be providing XBRL for the first time with their second quarter 2011 filings - their 10Qs due on August 15th. Foreign filers filing in IFRS will be providing their 20-F or 40-F filings in XBRL starting with their 2011 annual reports, provided the SEC approves the IFRS taxonomy. 

So other than those companies using XBRL to re-engineer their external reporting process, the primary beneficiary today is the SEC. As mentioned in the introduction, the investor community has yet to demonstrate significant interest (other than pockets here and there) in XBRL, simply because the information, while complete and accurate, is not timely. It is timely for the SEC, as their analysis is based on the audited and reviewed financial statements, not on the earnings release. Of note, a separate mandate requires Mutual Funds to provide the risk and return summary in XBRL, beginning January 2011. These filings are already arriving at the SEC. 

10 April 2011

Summary of articles for 2011 first time XBRL filers

Over the past eighteen months at the Random Comments blog, we have posted a number of articles and comments that we think are relevant to the Group 3, or Year 3 filers - those that must provide XBRL to the SEC for the first time in 2011. Instead of making you search and wade through the other stuff, we thought it was time to publish a quick directory.

The following is a quick directory of articles broadly grouped by topic. Over that time some consistent themes have emerged, including the cost and burden of XBRL, the availability (or otherwise) of experts, background posts and some predictions. We hope this is helpful to anyone faced with their first year of XBRL. 


Cost and Burden 

2. Is XBRL Expensive? (Costs range from under $8,000 to over $30,000).
3. Why is XBRL so Expensive? (Cost Factors) Mainly people time and software, and the interaction of the two.
5. More on estimated costs - How the SEC estimated the cost of XBRL (January 2010)   
6. Never confuse cost with quality (Quality vs Price 1, Quality vs Price 2)   

Availability of Experts

9.  Over 10,000 companies to create XBRL this year - expertise will be in short supply   
11. Cost of (voluntary) assurance over XBRL today    

Options for filers

12. As published in IRWeb Report - the 2011 XBRL Buyers Guide   
13. Should you replace your entire external reporting framework to produce XBRL?   
14. 5 Questions filers should ask any prospective service provider or software vendor.  
15. Outline of first time filer options.  

Background

16. Why did the SEC mandate XBRL? Remember SOX?   
17. Observations about recent filings (March 2011)   
18. XBRL and the audit and assurance industry  How some auditors are using XBRL as a marketing tool. 
19. A conversation with David Blaszkowsky of the SEC (Part 1, Part 2)   

Predictions

20. Prediction time (detailed tagging)
21. Assurance anyone? It will happen, but first the auditors need to figure out how to do it.

Guest Posts 
22. Dennis Santiago of Institutional Risk Analytics (XBRL Usability - Part 1, Part 2)

21 March 2011

Observations from the (XBRL) Cloud - 15th March

As more XBRL is produced and provided to the SEC, what can we say about the instance documents as a whole, and with limited information, what general observations can we make?

The first observation I would make is that the SEC's validation and acceptance of instance documents probably needs to be tightened up a bit. Arguably, the SEC should not be accepting a filing with over 95 SEC Edgar Filer Manual (EFM) validation errors. Likewise, when some of the largest filers are providing XBRL with 67% of elements being extensions, it suggests that detailed tagging might produce an awful lot of data, but precious little comparable data.

From a software or vendor perspective, there are also some interesting observations. The first, of course, is that the name of the software used to produce the XBRL may or may not relate to the actual parties that produced the XBRL. Certainly the software provider is named, but the filing agent or outsource provider (for example) is not. Still, even the information that is available is illuminating.

So before I jump in to some of what I've seen, I'd like to thank Cliff Binstock for producing and providing the XBRLCloud Report online. This is a fantastic resource for performing a simple 'health check' on the current state of XBRL filings with the SEC. On 15th March 2011, I visited the XBRLCloud report and copied it into Excel - the report currently is limited to around the 1500 most recent filings, due no doubt to the size of the downloaded information. For my purposes this is fine (although I would love to see all the data) as it roughly covers the first filing season of this year, with most of the first and second wave filers reporting.
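The sort of "health check" aggregation described here - copy the report, then summarize errors per software vendor - can be sketched as follows. The vendor names and counts below are invented placeholders, not figures from the actual XBRLCloud report:

```python
# A rough sketch of the per-vendor aggregation described above, run over
# rows copied out of the XBRLCloud report. The sample rows are made up;
# the real report covered around 1500 filings.
from collections import defaultdict

rows = [
    # (software vendor, EFM errors, warnings, inconsistencies)
    ("VendorA", 0, 0, 3),
    ("VendorA", 96, 0, 12),
    ("VendorB", 0, 8, 775),
    ("VendorB", 0, 0, 0),
]

by_vendor = defaultdict(list)
for vendor, errors, warnings, inconsistencies in rows:
    by_vendor[vendor].append((errors, warnings, inconsistencies))

for vendor, stats in sorted(by_vendor.items()):
    clean = sum(1 for e, w, i in stats if e == w == i == 0)
    worst = max(e for e, w, i in stats)
    print(f"{vendor}: {len(stats)} filings, {clean} fully clean, "
          f"worst EFM error count {worst}")
```

The same grouping works equally well in Excel with a pivot table, which is essentially what I did.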

Key findings

In terms of the results, I've picked out some specific information, and run some averages. For each of these, I deliberately don’t mention the specific company or software solution that was used to create the instance document, as I'm sure all the software vendors will be pointing out their successes (be it the most filings, the cleanest filings, the fastest or the cheapest). There is no need to point out the software vendors or filing agents (where they are the same) whose tools produced the instance documents with the most errors, warnings or inconsistencies (I call these the dirtiest, though that's probably a bit unfair), or the most extended filings.

One interesting observation is that it seems easier to get an SEC EFM Error accepted than a Warning. For example, the instance document with the most Errors (EFM) had 96, while the instance document with the most Warnings had only 8. Of course, when we look at inconsistencies, the largest had 775.  Now that's impressive. I could well imagine a conversation after seeing a list of 775 inconsistencies: "But will the SEC accept it?" - "Yes" - "Then just file the damn thing and move on."

Errors (EFM)

As already mentioned, the highest number of SEC EFM errors in a filing was 96, and there were 90 examples of instance documents with EFM errors. The good news is that this means there were over 1400 instance documents with zero (0) EFM errors. Furthermore, no vendor is over-represented in the EFM errors category, and some had no EFM errors at all in filings created with their software. However, while some vendors had zero Errors, none had zero Errors, Warnings and Inconsistencies combined.

Moving into Warnings and Inconsistencies, the numbers become a little more interesting, with the highest "scores" (so to speak) being 8 and 775 respectively. As mentioned above, it is remarkable that the highest number of Errors in an accepted filing is 12 times the highest number of Warnings.

Errors (GAAP Architecture)

Looking at GAAP Architecture errors, the numbers are also a little disturbing, with the highest number of errors being 382. That's a huge number of GAAP Architecture errors in one filing, and again suggests "running out of time" rather than a view that errors are acceptable.

As with the EFM Errors to Warnings ratio, there is a similar ratio in the GAAP Architecture errors, with the highest number of Warnings being 26 in a filing. Again, there is no specific pattern across the vendors.

Extensions

The biggest issue visible to all is the percentage of extensions. 23 of the filings in the list have over 50% extensions, with the highest being 67% - meaning two out of every three data points tagged in XBRL were extensions.
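The extension percentage itself is just a ratio of element counts. A minimal sketch, assuming extensions can be identified by a company-specific namespace prefix rather than a standard taxonomy prefix such as us-gaap (the element names below are invented):

```python
# Illustrative extension-rate calculation: a tagged fact counts as an
# extension if its namespace prefix is company-specific rather than a
# standard taxonomy prefix. Prefixes and element names here are made up.
STANDARD_PREFIXES = {"us-gaap", "dei"}

tagged_facts = [
    "us-gaap:Revenues",            # standard US GAAP taxonomy element
    "abc:AdjustedWidgetRevenue",   # company-specific extension
    "abc:NonGaapOperatingMargin",  # company-specific extension
]

extensions = sum(1 for f in tagged_facts
                 if f.split(":")[0] not in STANDARD_PREFIXES)
rate = 100 * extensions / len(tagged_facts)
print(f"{extensions} of {len(tagged_facts)} facts are extensions ({rate:.0f}%)")
```

Run over the three sample facts above, this reports the same two-in-three (67%) rate as the worst filing in the report.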

The entire point of an extension is to provide the capability to report information that represents a unique aspect of a company's operations or reporting. As David Blaszkowsky said to me some months ago: "Well, the number of extensions is being seen, or presented, as a bad thing. Actually, I could not disagree more. I'm quite excited about extensions. Thinking about it, extensions will reveal unique differences in substance between companies. Isn't that the whole point?" Unfortunately the level of extensions in some filings simply makes a mockery of that idea.

Why so many extensions? Who knows? Could it be that the US GAAP taxonomy is inadequate for some types of organization? Could it be that it was simply faster to create a new extension for each item than to search for and use the appropriate element? Was there a desire to "be transparent while ensuring opacity"? It certainly suggests that the FASB has its work cut out updating the taxonomy with elements to enable comparability.

I do not know the answer, but "I'll tell you what I know - England Prevails". (Oops, sorry about that - 10 "attaboys" to the first person who correctly identifies the movie that is from).

But back on track; I do not know the answer, but I cannot imagine that XBRL tagged information that is two-thirds extensions actually adds any value to the analysis of an individual company, or provides any assistance in sectoral or industry analysis.

Observations

I said that I would not discuss individual vendors and their results. The averages across vendors/software are not significantly different, with some highlights and some, well, low-lights in the data. In addition, for some of the software solutions there are too few filings to make analysis and discussion meaningful. One "high error" report where there are only six filings using that software does not prove a trend, especially when the other filings using the same software all look clean.

Equally, the XBRLCloud report does not let us see what filing agent or provider produced the instance document (except of course in cases where the filing agent and the software provider are the same).

However, it is possible to see which software is not included in the Cloud Report. That said, with a "free" offer I'm confident they will be back on the list soon. Then again, is it possible that after a client's first filing (and the client subsequently moving to a different provider), a "free" offer is simply a good way to attract new clients? Never mind that an audience recently heard at a showcase presentation that the actual cost of the solution would be $12,000 or more...

Summary

The XBRLCloud report contains a wealth of information about filings with the SEC, and it is well worth taking a wander through the results. At the same time, it is also possible to use that information selectively. After all, we can be pleased with the number of filings and the smooth progression of the mandate, but the XBRLCloud Report certainly raises questions.

Regardless, my own conclusions include:

  1. There are still too many errors getting through (or being accepted) by the SEC's own validation.
  2. Extensions are out of control. Is this because companies wish to create extensions (thus reducing comparability), or because the elements do not exist? Or is there simply not enough time?