Even though computers can't read, they're very effective at extracting information from natural language text. They can determine a text's main themes, decide whether the writer's sentiment is positive or negative, judge whether two documents are similar, assign labels to documents, and more.
This course shows you how to accomplish common NLP (natural language processing) tasks using Python, an easy-to-understand, general-purpose programming language, in conjunction with the Python NLP libraries NLTK, spaCy, gensim, and scikit-learn. The course is designed for beginning programmers with or without Python experience.
Gain practical hands-on natural language processing experience using Python
Understand how to tokenize text so it can be processed as symbols
Learn to convert text and words to vectors using TF-IDF and word2vec
Explore dependency parsing, sentiment analysis, and LDA topic modeling
Learn to find named entities in text and map them to an external knowledge base
Understand the capabilities and limitations of natural language text processing
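As a taste of the kind of task the course covers, here is a minimal sketch of converting text to TF-IDF vectors and comparing documents with scikit-learn; the sample documents are illustrative, not taken from the course materials.

```python
# A minimal sketch: vectorize a few documents with TF-IDF and compare them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative documents (assumed for this example)
docs = [
    "Computers can extract information from natural language text.",
    "Natural language text contains information computers can extract.",
    "Topic modeling with LDA finds themes across many documents.",
]

# TfidfVectorizer tokenizes each document and builds a sparse
# document-term matrix with TF-IDF weights.
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)

# Cosine similarity between TF-IDF vectors: the first two documents
# share vocabulary, so they score higher than the first and third.
sims = cosine_similarity(tfidf)
print(sims[0, 1] > sims[0, 2])  # → True
```

spaCy, NLTK, and gensim offer richer pipelines (dependency parsing, named entities, word2vec), but this two-step vectorize-then-compare pattern is the core of many of the tasks listed above.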
Jonathan Mugan is CEO and co-founder of DeepGrammar, a natural language processing company. Jonathan has a PhD in computer science from the University of Texas, and has been working in AI and machine learning since 2003. He describes his research focus as "making the squishy reality of our everyday world available to computation."
Dr. Mugan specializes in artificial intelligence and machine learning. His current research focuses on deep learning, where he seeks to enable computers to acquire abstract representations that capture subtleties of meaning. He received his PhD in computer science from the University of Texas at Austin; his thesis work was in developmental robotics, focusing on how to build robots that learn about the world the way children do.