Key Takeaways:
- The rise of the knowledge economy and service sector has made national income and product accounting increasingly difficult, leading to an underestimation of GDP.
- AI’s impact on the economy is underestimated because AI spending is expensed rather than capitalized, quality change and service flows are mismeasured, and AI’s scalability and spillovers escape priced market transactions.
- A comprehensive approach to measuring AI’s impact on the economy must account for both intangible benefits and costs, including cybersecurity risk, privacy loss, and intellectual property erosion.
- A two-horizon measurement agenda is proposed, including a near-term Generative AI Intensity Index and medium-term reforms to national accounts.
- Better measurement of AI is crucial for informed policy decisions on innovation, competition, and the labor market.
Introduction to the Challenge of Measuring AI’s Impact
The rise of the knowledge economy and service sector has made it increasingly difficult to accurately measure the national income and product accounts that underpin gross domestic product (GDP) calculations. As research by Ellen McGrattan and the late Edward Prescott has shown, the failure to incorporate intangible capital into macroeconomic models leads to an underestimation of GDP. The problem is compounded by the growing role of intangible capital in driving investment and growth in certain sectors. As noted in the article, "the share of output growth that should be attributed to the service flow of AI assets is either misassigned to total factor productivity or disappears entirely, and sectors that adopt early can look weak before adoption and strong once the intangible capital is in place."
The Underestimation of AI’s Impact
The article highlights three reasons why AI’s impact on the economy may be underestimated. First, AI spending is typically expensed rather than capitalized, meaning it is recorded as an operating cost rather than an investment. This masks the buildup of productive capacity and produces a J-curve dynamic: expenditures and organizational adjustment arrive first, depressing measured productivity, while the associated output gains materialize only after firms have redesigned workflows and integrated AI into decision processes. Second, the price and quantity of AI-related spending are poorly measured, with statistical agencies relying on sum-of-costs valuation and generic software deflators that miss quality change. Third, a large share of the value created by digital services never passes through priced market transactions, taking the form instead of consumer surplus and cross-firm spillovers that conventional accounts do not capture.
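The accounting consequence of the first point can be made concrete with a toy calculation. All figures below are hypothetical and chosen only to show the mechanics: when in-house AI development is expensed, it subtracts from a firm's measured value added; when the same outlay is treated as own-account investment, it counts as output, so measured GDP is higher in the year of the spend.

```python
# Toy illustration (hypothetical numbers): expensing vs. capitalizing
# the same AI outlay in the year it occurs.

sales = 1_000.0      # firm's market sales
other_costs = 600.0  # non-AI intermediate inputs
ai_spend = 100.0     # in-house AI development effort

# Expensed: AI spending is treated like an intermediate cost,
# so it reduces measured value added.
value_added_expensed = sales - other_costs - ai_spend

# Capitalized: the same spending is own-account investment, which is
# itself part of output, so measured value added is higher.
value_added_capitalized = (sales + ai_spend) - other_costs - ai_spend

gap = value_added_capitalized - value_added_expensed
print(gap)  # 100.0 — the hidden intangible investment
```

The gap equals the full AI outlay in year one; in later years the capitalized treatment instead records depreciation and a service flow from the asset, which is the J-curve timing the article describes.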
Unmeasured Costs of AI
In addition to the unmeasured benefits of AI, there are also unmeasured costs associated with its adoption. These include cybersecurity risk, privacy loss, and intellectual property erosion. For example, the article notes that "companies with high cybersecurity exposure (e.g., many unresolved vulnerabilities in their networks) significantly underperform in the stock market, with roughly a 0.33% lower return per month compared to more secure peers." Furthermore, the use of copyrighted material to train AI models has sparked a wave of lawsuits and concerns about the "tragedy of the commons" in creative industries. As the article states, "AI firms are appropriating value (i.e., training on their work and potentially displacing their market) without compensation, effectively shifting value away from original producers."
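To put the quoted cybersecurity figure in perspective, a roughly 0.33% lower monthly return compounds to a meaningfully large annual drag. A quick back-of-the-envelope calculation (using only the figure quoted above):

```python
# Compound the quoted ~0.33% monthly stock underperformance of firms
# with high cybersecurity exposure over a 12-month horizon.
monthly_gap = -0.0033
annual_gap = (1 + monthly_gap) ** 12 - 1
print(f"{annual_gap:.1%}")  # roughly -3.9% per year
```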
A Two-Horizon Measurement Roadmap
To address these challenges, the article proposes a two-horizon measurement agenda. In the near term, a Generative AI Intensity Index could be developed to track AI adoption by sector and region, using statistical surveys and provider telemetry. This index would provide a real-time metric of AI usage and could be mapped to industries and regions. In the medium term, reforms to national accounts could be implemented to separately identify AI-related capital, services, labor reallocation, and household time use. As the article suggests, "an ideal ‘AI satellite account’ or improved GDP methodology would net out not just the hidden benefits of AI but also these intangible liabilities—from data breaches and privacy loss to IP disputes—to fully understand AI’s net contribution to welfare and productivity."
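One way such an index could be constructed, sketched below with entirely made-up sector names, shares, and weights (none of these numbers come from the article): combine an adoption share (fraction of firms using generative AI) with a usage-intensity measure (e.g., share of tasks or hours assisted), then weight each sector's score by its output share.

```python
# Hypothetical sketch of a Generative AI Intensity Index.
# All data below are illustrative placeholders, not survey results.

survey = {
    # sector: (adoption_share, usage_intensity, output_weight)
    "information":      (0.60, 0.30, 0.10),
    "finance":          (0.45, 0.20, 0.20),
    "manufacturing":    (0.25, 0.10, 0.30),
    "health_education": (0.20, 0.08, 0.40),
}

def sector_intensity(adoption, intensity):
    """Per-sector score: adoption share scaled by how deeply adopters use AI."""
    return adoption * intensity

def aggregate_index(data):
    """Output-share-weighted average of the per-sector scores."""
    total_weight = sum(w for _, _, w in data.values())
    return sum(sector_intensity(a, i) * w for a, i, w in data.values()) / total_weight

print(round(aggregate_index(survey), 4))
```

Because the inputs are broken out by sector, the same structure supports the mapping to industries and regions described above; the design choice of weighting by output share keeps the aggregate comparable to GDP-based statistics.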
Concrete Policy Applications
A well-implemented AI Intensity Index and improved AI accounts would have direct policy applications. For example, the index could serve as an early warning system for labor transitions in different sectors, allowing policymakers to proactively ramp up worker retraining programs or education curricula. The data could also inform macroeconomic policy, helping to distinguish productivity-driven growth from weak demand. Additionally, the index could guide policymakers on where to promote diffusion, and it could aid in program evaluation after policy implementation. As the article notes, "better measurement is not just an academic exercise—it directly enables smarter, more timely policy responses in workforce development, macroeconomic management, infrastructure, and innovation strategy."
Conclusion
In conclusion, measuring AI’s impact on the economy is a complex challenge that requires a comprehensive approach. By acknowledging the unmeasured benefits and costs of AI, and by developing a two-horizon measurement agenda, policymakers can gain a better understanding of AI’s impact on the economy. As the article states, "AI now merits the same treatment" as other technologies that have been tracked through satellite accounts and other measurement systems. By investing in a dedicated AI measurement program, the United States can ensure that it has the data it needs to make informed policy decisions about innovation, competition, and the labor market.
Counting AI: A blueprint to integrate AI investment and use data into US national statistics

