Webinar: [Apple | Organization] and [Oranges | Fruit] – How to Evaluate NLP Tools for Entity Extraction
You have documents and want to extract information from them, but which NLP tool or library should you use? There are many to choose from, and evaluating which is best is not straightforward: differences in output between tools often prevent direct comparison. NLP tools built on different underlying technologies will tag text differently, extract different sets of entities, and classify those entities differently. Basis Technology regularly performs evaluations of disparate NLP tools and encounters these challenges. Learn how we handle them to produce meaningful scoring of tools.
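To illustrate one of these challenges: before two tools can be compared, their differing entity type labels have to be normalized onto a shared schema. The sketch below is a hypothetical example; the label names and mapping are invented for illustration, not taken from any particular tool.

```python
# Hypothetical illustration: two NLP tools label the same mentions
# differently, so we map both onto a shared schema before scoring.
# All label names and the mapping itself are invented for this sketch.

SHARED_SCHEMA = {
    # tool-specific label -> shared label
    "ORG": "Organization",
    "ORGANIZATION": "Organization",
    "COMPANY": "Organization",
    "PER": "Person",
    "PERSON": "Person",
}

def normalize(entities):
    """Map (text, tool_label) pairs onto the shared schema,
    dropping entity types the schema does not cover."""
    out = []
    for text, label in entities:
        shared = SHARED_SCHEMA.get(label.upper())
        if shared is not None:
            out.append((text, shared))
    return out

# Two tools, two label vocabularies, same underlying entities:
tool_a = [("Apple", "ORG"), ("Tim Cook", "PERSON")]
tool_b = [("Apple", "COMPANY"), ("Tim Cook", "PER")]
```

After normalization, both outputs reduce to the same `(text, "Organization"/"Person")` pairs and can be scored against a single gold standard.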
This webinar will also cover best practices for annotating a test data set and selecting a gold standard, as well as common ways to measure the accuracy of both the annotation and the extraction.
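Extraction accuracy against a gold standard is commonly summarized with precision, recall, and F1. Here is a minimal sketch assuming exact-match scoring over (span text, label) pairs; real evaluations also have to deal with partial span overlaps and label hierarchies, which is part of what makes these comparisons hard.

```python
def score(predicted, gold):
    """Precision/recall/F1 over exact (span_text, label) matches.
    A toy scoring scheme for illustration only: it ignores partial
    span overlaps and treats every label mismatch as a miss."""
    pred, ref = set(predicted), set(gold)
    tp = len(pred & ref)  # true positives: exact matches
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(ref) if ref else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: the tool finds one of the two gold entities.
gold = [("Apple", "Organization"), ("Oranges", "Fruit")]
predicted = [("Apple", "Organization")]
p, r, f = score(predicted, gold)  # p = 1.0, r = 0.5
```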
A 40-minute presentation will be followed by a Q&A session.
April 30 at 11:30 am ET
8:30 am PT
11:30 am ET
4:30 pm London
6:30 pm Tel Aviv
VP Engineering, Text Analytics
Gil leads the engineering team responsible for text analytics, including existing products and new technology initiatives. He has nearly 30 years of experience developing software and leading engineering teams, including work at Curl (now part of Sumitomo Corporation), GTECH (now part of IGT PLC), and Constant Contact. Gil holds a BS in computer science from Cornell University, an MA in liberal arts from Harvard University, and a certificate in management from MIT’s Sloan School of Management.