To overcome these long-standing bottlenecks, our Data Lab leverages AI to surface granular, concrete, real-world information and construct datasets designed specifically for implementation-ready industrial policy analysis. Recent advances in large language models (LLMs) have made it possible to design and deploy legions of AI researchers that mimic highly trained human researchers in assembling and structuring vast amounts of information. Using these tools, our data team has developed a first-of-its-kind auto-research algorithm that draws on publications, intellectual property rights records, business-to-business price data, targeted web search, and other sources to discover and characterize productive capabilities at a level of detail fine enough to move from rough analysis to practical policy design and implementation. The resulting datasets capture productive capabilities, technological dependencies, and value-adding processes, including firm-level capacities, talent availability, value-added, cost, and profit data, and resource use.
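To make the pipeline concrete, the sketch below illustrates one way such an auto-research workflow could be organized: source connectors gather evidence on a firm from the kinds of sources named above, and an extraction step distills that evidence into a structured capability record. This is a minimal illustrative sketch, not the Data Lab's actual implementation; every name in it (the connectors, `CapabilityRecord`, the example firm) is a hypothetical assumption, and the LLM extraction step is stubbed out.

```python
# Illustrative sketch of a multi-source "AI researcher" pipeline.
# All names here are hypothetical; the LLM extraction step is a stub.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str   # e.g. "patents", "publications", "b2b_prices", "web_search"
    text: str     # raw snippet retrieved from that source

@dataclass
class CapabilityRecord:
    firm: str
    capabilities: list = field(default_factory=list)   # e.g. "precision casting"
    dependencies: list = field(default_factory=list)   # upstream inputs/technologies
    indicators: dict = field(default_factory=dict)     # value added, cost, headcount, ...

def gather_evidence(firm: str) -> list:
    """Query each source connector in turn. Real connectors would hit patent
    registries, publication indexes, B2B price databases, and web search;
    here each one just returns a placeholder snippet."""
    connectors = {
        "patents": lambda f: f"Patent filings naming {f} ...",
        "publications": lambda f: f"Papers co-authored by staff at {f} ...",
        "b2b_prices": lambda f: f"Price quotes listing {f} as supplier ...",
        "web_search": lambda f: f"Company pages and news about {f} ...",
    }
    return [Evidence(src, fetch(firm)) for src, fetch in connectors.items()]

def extract_record(firm: str, evidence: list) -> CapabilityRecord:
    """Stub for the LLM extraction step: in practice a model would read the
    evidence and emit structured fields (capabilities, technological
    dependencies, cost and value-added indicators)."""
    record = CapabilityRecord(firm=firm)
    for ev in evidence:
        # A real implementation would parse model output here;
        # we record a placeholder signal per source instead.
        record.indicators[ev.source] = len(ev.text)
    return record

if __name__ == "__main__":
    firm = "Example Manufacturing SRL"   # hypothetical firm name
    print(extract_record(firm, gather_evidence(firm)))
```

In a production setting, the per-firm records produced this way would be aggregated across many firms to form the fine-grained datasets described above.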
Our tools are already being piloted in industrial value-chain targeting projects with the World Bank for the Romanian government, and in large-scale company surveys with the European Commission and the World Bank: interview-based surveys of 1,964 SMEs across 34 countries, conducted in their local languages using our Verbatim tool.