Sumanta Muduli
Data Scientist at Flutura Decision Sciences & Analytics | almaBetter
What are some of the applications of NLP?
Computers are great at working with structured data like spreadsheets and database tables, but we humans usually communicate in words, not in tables, and computers cannot understand natural language directly. To bridge this gap, NLP uses techniques that convert language into useful representations, such as numbers or other mathematically interpretable objects, which we can then feed into ML algorithms depending on our requirements.
Machine learning needs data in numeric form, so we first have to clean the textual data. This process of preparing (or cleaning) text data before encoding is called text preprocessing, and it is the very first step in solving NLP problems. Libraries such as spaCy and NLTK make these preprocessing tasks easier.
Cleaning
1. Removing URLs-
We import the re library to strip URLs from the text.
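A minimal sketch of this step (the sample sentence and helper name are illustrative, not from the original post):

```python
import re

def remove_urls(text):
    # A common heuristic pattern for http(s) and www-style links;
    # it is not an exhaustive URL grammar.
    return re.sub(r"https?://\S+|www\.\S+", "", text)

print(remove_urls("Read the docs at https://example.com today"))
# -> 'Read the docs at  today'
```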
2. Removing punctuation-
Punctuation is basically the set of symbols [!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~]:
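A minimal sketch; Python's string.punctuation constant is exactly the symbol set listed above:

```python
import string

def remove_punctuation(text):
    # Delete every character that appears in string.punctuation.
    return text.translate(str.maketrans("", "", string.punctuation))

print(remove_punctuation("Hello, world! (This is NLP.)"))
# -> 'Hello world This is NLP'
```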
3. Converting all to lower case
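A one-line sketch of this step (the sample string is illustrative):

```python
text = "NLP Converts TEXT to Numbers"
print(text.lower())  # -> 'nlp converts text to numbers'
```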
4. Removing stopwords
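A minimal sketch using NLTK's English stopword list (the sample sentence and helper name are my own illustration):

```python
import nltk
from nltk.corpus import stopwords

nltk.download("stopwords")  # one-time download of the stopword list
stop_words = set(stopwords.words("english"))

def remove_stopwords(text):
    # Keep only the words that are not in the stopword list.
    return " ".join(word for word in text.split()
                    if word not in stop_words)

print(remove_stopwords("this is a simple sentence with some stopwords"))
# -> 'simple sentence stopwords'
```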
5. Tokenization-
It’s a method of splitting a string into smaller units called tokens. A token can be a word, a punctuation mark, a mathematical symbol, a number, etc.
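A minimal sketch using NLTK's word tokenizer (the sample sentence is illustrative):

```python
import nltk
from nltk.tokenize import word_tokenize

nltk.download("punkt")  # tokenizer models, downloaded once
tokens = word_tokenize("Tokens can be words, numbers, or symbols!")
print(tokens)
# -> ['Tokens', 'can', 'be', 'words', ',', 'numbers', ',', 'or', 'symbols', '!']
```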
6. Stemming and Lemmatization-
Stemming reduces a word to its root form by chopping off suffixes, so the result may not be a real word, while lemmatization maps a word to its dictionary base form (lemma) using vocabulary and morphological analysis.
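A minimal sketch contrasting the two, using NLTK's PorterStemmer and WordNetLemmatizer (the sample word is my own illustration):

```python
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download("wordnet")  # lexical database used by the lemmatizer
stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

# Stemming chops suffixes by rule; the result may not be a real word.
print(stemmer.stem("studies"))          # -> 'studi'
# Lemmatization maps the word to its dictionary form.
print(lemmatizer.lemmatize("studies"))  # -> 'study'
```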
7. Removing short words of length ≤ 2
Even after performing all the steps above, some noise is still present in the corpus, so we also remove words of very short length.
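A minimal sketch of this filter (the token list is illustrative):

```python
tokens = ["nlp", "is", "a", "lot", "of", "fun"]
# Keep only tokens longer than 2 characters.
tokens = [t for t in tokens if len(t) > 2]
print(tokens)  # -> ['nlp', 'lot', 'fun']
```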
8. Converting the list back into a string
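A one-line sketch, continuing the token list from the previous step:

```python
tokens = ["nlp", "lot", "fun"]
clean_text = " ".join(tokens)
print(clean_text)  # -> 'nlp lot fun'
```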
Now we are all set to vectorize our text.
1. CountVectorizer: CountVectorizer implements the bag-of-words approach. It converts a collection of documents into a matrix of token counts, where each row is a document and each column is a word from the learned vocabulary.
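A minimal sketch with scikit-learn (the two-document corpus is my own illustration; get_feature_names_out requires scikit-learn >= 1.0):

```python
from sklearn.feature_extraction.text import CountVectorizer

corpus = ["the cat sat", "the cat sat on the mat"]
cv = CountVectorizer()
X = cv.fit_transform(corpus)        # sparse document-term count matrix
print(cv.get_feature_names_out())   # -> ['cat' 'mat' 'on' 'sat' 'the']
print(X.toarray())
# -> [[1 0 0 1 1]
#     [1 1 1 1 2]]
```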
2. TF-IDF: TF-IDF transforms a count matrix into a normalized tf (term-frequency) or tf-idf (term-frequency times inverse document-frequency) representation, for example with scikit-learn's TfidfTransformer. The formula used to compute the tf-idf for a term t of a document d in a document set is:
tf-idf(t, d) = tf(t, d) × idf(t), where idf(t) = log[n / df(t)] + 1,
with n the total number of documents in the set and df(t) the number of documents containing the term t. (With scikit-learn's default smooth_idf=True, the idf becomes log[(1 + n) / (1 + df(t))] + 1.)
Note-
CountVectorizer only counts the number of times a word appears in each document, which biases the representation in favour of the most frequent words and ends up ignoring rare words that could have helped us process our data more effectively.
To overcome this, we use TfidfVectorizer.
TfidfVectorizer considers the overall document weightage of a word: it weights the raw counts by a measure of how often words appear across the documents, which helps us deal with, and penalize, the most frequent words.
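A minimal sketch mirroring the CountVectorizer example above (same illustrative corpus):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["the cat sat", "the cat sat on the mat"]
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(corpus)
print(tfidf.get_feature_names_out())  # -> ['cat' 'mat' 'on' 'sat' 'the']
# Each row is an L2-normalized tf-idf vector; rarer words such as
# 'on' and 'mat' get relatively higher weight than the ubiquitous 'the'.
print(X.toarray().round(2))
```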
That’s all folks, have a nice day!