Some doubts about modelling features for the libsvm/scikit-learn library in Python


I have scraped a lot of eBay titles like this one:

Apple iPhone 5 White 16GB Dual-Core

and I have manually tagged all of them in this way

B M C S NA

where B=Brand (Apple) M=Model (iPhone 5) C=Color (White) S=Size (16GB) NA=Not Assigned (Dual-Core)

Now I need to train an SVM classifier using the libsvm library in Python to learn the sequence patterns that occur in the eBay titles.

I need to extract new values for those attributes (Brand, Model, Color, Size) by treating the problem as a classification one. In this way I can predict new models.

I want to represent these features to use them as input to the libsvm library. I work in Python :D.

  1. Identity of the current word

I think that I can interpret it in this way

0 --> Brand
1 --> Model
2 --> Color
3 --> Size 
4 --> NA

If I know that the word is a Brand, I will set that variable to 1 (true). That is OK for the training set (because I have tagged all the words), but how can I do it for the test set? I don't know the category of a word (this is why I'm learning it :D).

  2. N-gram substring features of the current word (N=4,5,6)

No idea, what does it mean?

  3. Identity of the 2 words before the current word.

How can I model this feature?

Considering the legend that I created for the 1st feature, I have 5^2 = 25 combinations:

00 10 20 30 40
01 11 21 31 41
02 12 22 32 42
03 13 23 33 43
04 14 24 34 44

How can I convert it to a format that the libsvm (or scikit-learn) can understand?
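
My current idea (I am not sure it is right) is to flatten the tag pair of the two previous words into 25 binary features, one per cell of the table above:

```python
# My idea (not sure it is correct): flatten the pair of tags of the two
# previous words into 5^2 = 25 binary features, one per cell of the table.
TAGS = {'B': 0, 'M': 1, 'C': 2, 'S': 3, 'NA': 4}

def encode_previous_tags(t1, t2):
    vec = [0] * (len(TAGS) ** 2)             # 25 slots
    vec[TAGS[t1] * len(TAGS) + TAGS[t2]] = 1
    return vec

print(encode_previous_tags('B', 'M'))        # 1 only in the (Brand, Model) slot
```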

  4. Membership in the 4 dictionaries of attributes

Again, how can I do it? Having 4 dictionaries (for color, size, model and brand), I think that I must create a bool variable that I will set to true if and only if the current word matches an entry in one of the 4 dictionaries.
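
Something like this sketch, maybe (the dictionary contents here are just examples):

```python
# Sketch with toy dictionaries: one boolean feature per attribute dictionary.
brands = {'Apple', 'Samsung', 'Nokia'}
models = {'iPhone 5', 'Galaxy S3'}
colors = {'White', 'Black', 'Blue'}
sizes = {'16GB', '32GB', '64GB'}

def dictionary_features(word):
    # 1 if and only if the word appears in the corresponding dictionary
    return [int(word in d) for d in (brands, models, colors, sizes)]

print(dictionary_features('White'))  # [0, 0, 1, 0]
```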

  5. Exclusive membership in the dictionary of brand names

I think that, as for feature 4, I must use a bool variable. Do you agree?

If this question lacks some info, please read my previous question at this address: Support vector machine in Python using libsvm example of features

Last doubt: if I have a multi-token value like iPhone 5... should I tag iPhone as a model and 5 also as a model, or is it better to tag {iPhone 5} all as one model?

In the test dataset iPhone and 5 will be 2 separate words... so what is better to do?

Answer

The reason that the solution proposed to you in the previous question had insufficient results (I assume) is that the features were poor for this problem.

If I understand correctly, what you want is the following:

given the sentence -

Apple iPhone 5 White 16GB Dual-Core

you want to get:

B M C S NA

The problem you are describing here is equivalent to part-of-speech (POS) tagging in Natural Language Processing.

Consider the following sentence in English:

We saw the yellow dog

The task of POS tagging is to give the appropriate tag to each word. In this case:

We(PRP) saw(VBD) the(DT) yellow(JJ) dog(NN)

Don't invest time in understanding the English tags here; I give them only to show you that your problem and POS tagging are equivalent.

Before I explain how to solve it using SVM, I want to make you aware of other approaches: consider the sentence Apple iPhone 5 White 16GB Dual-Core as test data. The tag you set for the word Apple must be given as input to the tagger when you are tagging the word iPhone. However, after you tag a word, you will not change it. Hence, models that do sequence tagging usually achieve better results. The easiest example is Hidden Markov Models (HMM). Here is a short intro to HMM in POS tagging.

Now let's model this problem as a classification problem, and define what a window is:

`W-2,W-1,W0,W1,W2`

Here, we have a window of size 2. When classifying the word W0, we will need the features of all the words in the window (concatenated). Please note that for the first word of the sentence we will use:

`START-2,START-1,W0,W1,W2`

in order to model the fact that this is the first word. For the second word we have:

`START-1,W-1,W0,W1,W2`

And similarly for the words at the end of the sentence. The tags START-2, START-1, STOP1, STOP2 must be added to the model too.
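
A minimal sketch of this padding (the helper name is mine, not from any library):

```python
# Minimal sketch of the padding described above: pad the token list so that
# every real word has a full window around it (shown for window size 2).
def windows(tokens, size=2):
    padded = (['START-2', 'START-1'][-size:]
              + tokens
              + ['STOP1', 'STOP2'][:size])
    for i in range(size, size + len(tokens)):
        yield padded[i - size: i + size + 1]   # window centered on a real word

for w in windows(['Apple', 'iPhone', '5', 'White', '16GB', 'Dual-Core']):
    print(w)
# first line: ['START-2', 'START-1', 'Apple', 'iPhone', '5']
```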

Now, let's describe the features used for tagging W0:

`Features(W-2),Features(W-1),Features(W0),Features(W1),Features(W2)`

The features of a token are the word itself and, for the words before W0, the tag already given to them. We shall use binary features.

Example - how to build the feature representation:

Step 1 - building the word representation for each token:

Let's take a window size of 1. When classifying a token, we use W-1,W0,W1. Say you build a dictionary and give every word in the corpus a number:

n['Apple'] = 0
n['iPhone 5'] = 1
n['White'] = 2
n['16GB'] = 3
n['Dual-Core'] = 4
n['START-1'] = 5
n['STOP1'] = 6

Step 2 - a feature for each tag:

We create features for the following tags:

n['B'] = 7 
n['M'] = 8
n['C'] = 9 
n['S'] = 10 
n['NA'] = 11
n['START-1'] = 12
n['STOP1'] = 13
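
As a sketch, both mappings can be built from the data. I use two dicts here (the listings above write both as n[...]) because START-1 appears both as a word and as a tag:

```python
# Sketch: build the two index mappings from the corpus. Words take slots 0..6
# and tags take slots 7..13, exactly as in the listings above.
words = ['Apple', 'iPhone 5', 'White', '16GB', 'Dual-Core', 'START-1', 'STOP1']
tags = ['B', 'M', 'C', 'S', 'NA', 'START-1', 'STOP1']

word_index = {w: i for i, w in enumerate(words)}             # 0..6
tag_index = {t: len(words) + i for i, t in enumerate(tags)}  # 7..13
n_features = len(words) + len(tags)                          # 14
```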

Let's build a feature vector for START-1,Apple,iPhone 5: the first token is a word with a known tag (START-1 will always have the tag START-1), so the features for this token are:

(0,0,0,0,0,1,0,0,0,0,0,0,1,0)

(The features that are 1: the word START-1, at index 5, and the tag START-1, at index 12.)

For the token Apple:

(1,0,0,0,0,0,0)

Note that we use tag features only for the words before W0 (since we have already calculated their tags). Similarly, the features of the token iPhone 5:

(0,1,0,0,0,0,0)

Step 3 - concatenate all the features:

Generally, the features for a 1-window will be:

word(W-1),tag(W-1),word(W0),word(W1)
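
Putting the steps together, here is a sketch (reusing word_index, tag_index and n_features from the previous sketch) that builds this concatenated vector for classifying Apple in the window START-1, Apple, iPhone 5, and turns it into a sparse row that libsvm/scikit-learn accept:

```python
# Sketch: build the concatenated binary row for classifying 'Apple' in the
# window (START-1, Apple, iPhone 5). Depends on word_index, tag_index and
# n_features from the previous sketch.
from scipy.sparse import csr_matrix

def token_features(word, tag=None):
    if tag is None:                 # W0 and after: 7 word features only
        vec = [0] * len(word_index)
    else:                           # words before W0: 14 word + tag features
        vec = [0] * n_features
        vec[tag_index[tag]] = 1
    vec[word_index[word]] = 1
    return vec

row = (token_features('START-1', tag='START-1')  # the 14 features shown above
       + token_features('Apple')                 # (1,0,0,0,0,0,0)
       + token_features('iPhone 5'))             # (0,1,0,0,0,0,0)

X = csr_matrix([row])  # one sparse training row for a scikit-learn estimator
```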

Regarding your last doubt: I would use one more tag, NUMBER, so that when you tag the word 5 (since you split the title by space), the W0 features will have a 1 on some number feature, and a 1 in W-1's model tag, in case the previous token was tagged correctly as a model.

To sum up, what you should do:

  1. give a number to each word in the data
  2. build feature representation for the train data (using the tags you already calculated manually)
  3. train a model
  4. label the test data
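
Here is a minimal end-to-end sketch of these four steps with scikit-learn's LinearSVC (toy features, only the word identity, so it shows the pipeline shape rather than a good model):

```python
# Minimal sketch of steps 1-4 with scikit-learn (toy word-identity features).
from sklearn.svm import LinearSVC

title = ['Apple', 'iPhone', '5', 'White', '16GB', 'Dual-Core']
word_index = {w: i for i, w in enumerate(title)}   # step 1: number each word

def one_hot(word):
    vec = [0] * len(word_index)
    vec[word_index[word]] = 1
    return vec

X = [one_hot(w) for w in title]                    # step 2: feature rows
y = ['B', 'M', 'NUMBER', 'C', 'S', 'NA']           # the manual tags

clf = LinearSVC().fit(X, y)                        # step 3: train

# step 4: label test data left-to-right; with the full features you would
# feed each predicted tag back as the previous-tag feature of the next token.
print(clf.predict([one_hot('White')]))             # should print ['C']
```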

Final Note - a Warm Tip For Existing Code:

You can find a POS tagger implemented in Python here. It includes an explanation of the problem and the code, and it also does the feature extraction I just described for you. Additionally, they use a set for representing the features of each word, so the code is much simpler to read.

The data this tagger receives should look like this:

Apple_B iPhone_M 5_NUMBER White_C 16GB_S Dual-Core_NA

The feature extraction is done in this manner (see more at the link above):

```python
def get_features(i, word, context, prev):
    '''Map tokens-in-contexts into a feature representation, implemented as a
    set. If the features change, a new model must be trained.'''
    def add(name, *args):
        features.add('+'.join((name,) + tuple(args)))

    features = set()
    add('bias')                    # This acts sort of like a prior
    add('i suffix', word[-3:])
    add('i-1 tag', prev)
    add('i word', context[i])
    add('i-1 word', context[i-1])
    add('i+1 word', context[i+1])
    return features
```

For the example above:

context = ["Apple","iPhone","5","White","16GB","Dual-Core"]
prev = "B"
i = 1
word = "iPhone"

Generally, word is the str of the current word, context is the title split into a list, and prev is the tag you received for the previous word.
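
Tracing the snippet above on those values gives the following (the full tagger at the link also pads the context with start/end markers, so the indices there differ slightly):

```python
# Trace of get_features on the values above ("iPhone"[-3:] is "one"):
features = get_features(i=1, word="iPhone",
                        context=["Apple", "iPhone", "5",
                                 "White", "16GB", "Dual-Core"],
                        prev="B")
print(sorted(features))
# ['bias', 'i suffix+one', 'i word+iPhone',
#  'i+1 word+5', 'i-1 tag+B', 'i-1 word+Apple']
```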

I used this code in the past; it works fast with great results. Hope it's clear. Have fun tagging!
