Recent advancements in data and technology, such as generative AI, have given dealmakers more opportunities to establish a proprietary edge and competitive advantage than ever before. But challenges like information silos and overwhelmed teams are also on the rise as firms struggle to adapt to and embrace these new tools and tactics.
To learn more about how the most data- and tech-savvy firms create proprietary solutions, we hosted a webinar featuring Martin Pomeroy, Chief Product Officer of Filament Syfter, and Josh Giglio, VP of Product at Sourcescrub. During our conversation, our experts shared their insight into how dealmakers can combine critical data sources with their own “special sauce” to craft a one-of-a-kind sourcing engine. Let’s take a look at some of their top tips.
While firms’ rapid adoption of data-driven and AI-powered technology is exciting, it can also create new problems if done incorrectly. For example, implementing a number of new tools that each provide a small part of the bigger data picture and don’t “talk” to one another forces dealmakers to manually copy and paste information across systems. This leaves little time for actual data analysis and can also lead to harmful data silos and inaccurate insights.
To help mitigate these types of issues, Josh and Martin recommend firms focus on two key attributes when evaluating new data technology partners: quality and access. “How are you thinking about the balance of quality to depth or coverage within the data?” asks Josh. “Is the provider comprehensive in their approach? Are there multiple sources and attribution points for data within the system? Is there rigor in data validation and monitoring? Do you know the health of that data set and is it measurable?”
Regardless of whether you’re selecting the first or third data service provider in your technology stack, be sure to consider how easy the data will be for your team to access and how transportable it is. Accessibility is a prerequisite for usability, so Martin advises asking potential vendors questions like, “Are there download limits? Do you provide an API? Is this something that integrates with our CRM?”
Different systems classify information differently. Data points around ownership or growth rate for the same company may not align across tools or match the way your firm thinks about them — and that’s okay. The most successful firms are those that are able to harness data from across various sources and build proprietary systems, taxonomies, and models that define and inform their unique investment criteria.
“It’s only when you apply both your internal knowledge and your internal thinking or view of the world on that data that you get that unique instance that none of your competitors will have,” explains Martin. For instance, many Sourcescrub customers leverage and combine data signals like employee count, conference attendance, and job openings to build custom scoring models that help determine the growth trajectory and investment readiness of private companies.
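To make this more concrete, here’s a minimal sketch of what a custom scoring model like this might look like in Python. The signal names, weights, and caps are purely illustrative assumptions for this post — not Sourcescrub’s methodology or any particular firm’s model.

```python
# Illustrative sketch of a custom growth-scoring model.
# Signal names, weights, and caps are hypothetical assumptions,
# not any firm's (or Sourcescrub's) actual scoring logic.
from dataclasses import dataclass

@dataclass
class CompanySignals:
    employee_growth_pct: float   # year-over-year headcount growth
    conferences_attended: int    # events attended in the last 12 months
    open_roles: int              # current job openings

def growth_score(signals: CompanySignals, weights=None) -> float:
    """Combine raw signals into a single 0-100 readiness score."""
    weights = weights or {"employees": 0.5, "conferences": 0.2, "jobs": 0.3}
    # Normalize each signal to a 0-1 range before weighting.
    employees = min(signals.employee_growth_pct / 50.0, 1.0)    # cap at 50% YoY
    conferences = min(signals.conferences_attended / 5.0, 1.0)  # cap at 5 events
    jobs = min(signals.open_roles / 20.0, 1.0)                  # cap at 20 openings
    score = (weights["employees"] * employees
             + weights["conferences"] * conferences
             + weights["jobs"] * jobs)
    return round(score * 100, 1)

# Example: a company growing 30% YoY, seen at 3 conferences, hiring for 12 roles
print(growth_score(CompanySignals(30.0, 3, 12)))  # -> 60.0
```

The weights are where your firm’s “special sauce” lives: two firms looking at the same raw signals can rank the same universe of companies very differently.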
Other firms may use this same approach to create an ESG scoring system or accurately pinpoint opportunities with SaaS business models, for example. Consistently measuring the outcome of your approach and refining your inputs over time is key to improving the speed and precision with which your firm is able to source the targets that are the best fit for your specific thesis. “You need a system where you can continuously tailor and reinforce your input to get more and more output that’s custom and proprietary to your particular type of targets you would pursue,” says Josh.
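Building on the sketch above, that refinement loop might mean fitting the weights to your firm’s past outcomes rather than setting them by hand. The snippet below is a hedged illustration using a simple logistic regression over hypothetical historical deals; the data and features are assumptions, not a prescribed approach.

```python
# Hypothetical sketch: learn signal weights from past sourcing outcomes
# instead of hand-tuning them. Feature values and labels are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [employee_growth_pct, conferences_attended, open_roles]
# for a company the firm previously evaluated.
X_history = np.array([
    [45.0, 4, 18],
    [ 5.0, 0,  2],
    [30.0, 3, 12],
    [ 8.0, 1,  1],
])
# 1 = the company turned out to be a good-fit target, 0 = it did not.
y_history = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X_history, y_history)

# Score a new prospect; a higher probability means it looks more like past wins.
new_company = np.array([[25.0, 2, 9]])
print(model.predict_proba(new_company)[0, 1])
```

As more sourcing outcomes accumulate, retraining on them is one way to keep the model aligned with the targets that actually fit your thesis.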
One of the biggest questions when deciding whether to adopt new technology is whether to build it or buy it. However, a competitive edge can’t be bought. Using the same pre-built solutions and out-of-the-box reports as all other dealmakers only gets you so far. At the same time, bringing data and complex technology in-house is daunting and difficult to manage.
That’s why Martin encourages all dealmakers to “buy their build.” This involves purchasing a solution that not only serves as a foundation for your firm’s data- and AI-driven processes, but that can also be customized according to your unique workflows and proprietary strategies. The tool should also handle basic yet critical needs like security and user provisioning so your team doesn’t have to worry about them.
Not sure where to focus your customization efforts? Josh recommends using a modification of the WINS framework to help determine the areas where AI and process automation will have the biggest impact. WINS refers to work that involves “words, images, numbers, and sounds,” which are all areas where AI can provide massive efficiency gains. In addition to leveraging the WINS framework, Josh also suggests thinking about implementing AI and automation across four key categories: Inference or prediction, summarization, content generation, and repetitive process automation.
Using the latest data and technology to transform your firm’s sourcing efforts from reactive to proactive and predictive is no longer optional. These three strategies are just the tip of the iceberg of what our expert panelists shared during our webinar. To learn more, watch the full session on-demand.