
Uri Jablonowsky
Founder • DIGBI

In today's hyper-connected world, B2B customers are more vocal than ever about their product experiences. From G2 reviews and user community posts to support tickets and sales feedback, user-generated content has become a goldmine of authentic insights. But here's the challenge: how do product teams efficiently transform hundreds of unstructured customer comments into actionable development priorities?
Recent breakthrough research from Tongji University demonstrates how Large Language Models (LLMs) can revolutionize this process for businesses of all sizes, offering a sophisticated approach to mining customer needs from scattered feedback sources.
The Traditional Challenge

Traditional B2B market research—customer advisory boards, surveys, and stakeholder interviews—provides valuable insights but comes with limitations: high costs, lengthy timelines, and potential gaps in capturing real usage patterns. User-generated content from review platforms, support channels, and community forums presents a rich alternative, but analyzing hundreds of detailed enterprise feedback comments manually becomes impractical. The research team demonstrated this potential using Tesla Model 3 feedback, processing over 15,000 user comments—a scale that mirrors the challenge many growing B2B companies face with their own customer data.
LLMs as Customer Needs Mining Engines

The study demonstrates how LLMs can serve as sophisticated parsing engines, transforming messy feedback into clean, categorized requirements. Through a multi-stage framework, LLMs can:
Extract and Classify Customer Needs: Processing raw user feedback to identify specific product attributes while capturing associated sentiment.
Align Similar Requirements: Different customers express the same need using different language. The LLM framework successfully grouped related requirements, reducing 3,729 initial categories down to 38 core areas.
Generate Industry-Standard User Stories: The system transforms informal complaints into structured improvement recommendations. Instead of parsing hundreds of "battery problem" comments, teams receive organized insights about range optimization, charging compatibility, and winter performance concerns.
Automated Customer Story Generation

LLMs demonstrate a remarkable ability to convert freestyle customer reviews into industry-standard user stories. When customers write informal comments like "the parking buzzer was also very low," the LLM framework extracts the core need (improved audio feedback systems) and structures it for development teams.
The study showed LLMs achieved 78.26% accuracy in extracting structured information—comparable to specialized deep learning models but without requiring extensive training data. The automated generation of comprehensive improvement reports represents another breakthrough, producing detailed recommendations organized by priority and feasibility.
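The final structuring step amounts to filling the standard "As a …, I want …, so that …" template from the extracted need. A trivial sketch, with the role, need, and benefit wording chosen here for illustration rather than taken from the paper:

```python
def to_user_story(role: str, need: str, benefit: str) -> str:
    """Format an extracted need in the standard user-story template."""
    return f"As a {role}, I want {need}, so that {benefit}."

# The informal comment "the parking buzzer was also very low" might become:
story = to_user_story(
    role="driver",
    need="louder audio feedback from the parking sensor",
    benefit="I can hear proximity warnings while reversing",
)
print(story)
```

In practice the LLM produces the role/need/benefit fields itself; the value is that every piece of informal feedback lands in the same structure a development team already plans against.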
Where Human Expertise Remains Essential

While LLMs excel at processing and structuring feedback, human judgment remains irreplaceable in key areas. The research specifically noted that "LLM Agent is weak in feasibility analysis, which needs to comprehensively analyze data from various aspects."
Complex feasibility assessments require domain expertise, understanding of technical constraints, and strategic business priorities. The researchers incorporated expert analysis into their framework, allowing LLMs to organize information while human specialists evaluate technical and economic feasibility.
This human-AI collaboration proves particularly valuable for B2B product teams who can query structured customer databases while referencing LLM-generated reports. The combination enables data-driven decisions while preserving the strategic thinking and domain expertise that only humans can provide—especially crucial when evaluating enterprise client needs and technical constraints.
Robustness and Real-World Application

The study's most encouraging finding concerns system robustness. Even when prompts were perturbed through translation or paraphrasing, the system maintained over 96% success rate in producing structured outputs. This resilience suggests LLM-powered analysis can perform reliably across varying input quality.
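Measuring that success rate is straightforward: rerun the pipeline with perturbed prompts and count how often the response still parses into the expected schema. A minimal sketch, using simulated responses in place of real model calls:

```python
import json

def is_structured(output: str,
                  required=frozenset({"attribute", "sentiment"})) -> bool:
    """Check that a response parses as a JSON list of records
    containing the expected fields."""
    try:
        records = json.loads(output)
    except json.JSONDecodeError:
        return False
    return isinstance(records, list) and all(
        isinstance(r, dict) and required <= r.keys() for r in records
    )

# Simulated responses to paraphrased prompts; in practice each would come
# from re-running the extraction with a perturbed instruction.
responses = [
    '[{"attribute": "charging", "sentiment": "negative"}]',
    '[{"attribute": "range", "sentiment": "positive"}]',
    "Sorry, I could not parse that review.",  # one malformed response
    '[{"attribute": "audio feedback", "sentiment": "negative"}]',
]
success_rate = sum(map(is_structured, responses)) / len(responses)
print(f"structured-output success rate: {success_rate:.0%}")  # 75%
```

The same validator doubles as a production guardrail: responses that fail the schema check can be retried or routed to a human instead of silently corrupting the database.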
Strategic Competitive Intelligence Through Customer Feedback

For B2B product managers, understanding your customers' needs is only half the equation—you also need to know how competitors address similar challenges. DigBI's agentic competitive intelligence system extends the research principles by analyzing how competitors handle specific features through weighted evaluation of customer reviews across industry ecosystems. When your enterprise clients mention "integration difficulties," our platform reveals how competitors tackle this issue and where opportunities exist for superior solutions.
Our approach helps growing B2B companies systematically process customer feedback about competitive offerings to discover untapped market niches and unmet needs. Rather than requiring massive datasets or data science teams, we make competitive intelligence accessible to product managers who need actionable insights today—empowering companies to establish secure market positioning through superior customer-centric innovation.