The motivation for semi-supervised algorithms is to overcome the scarcity of annotated corpora. Semi-supervised learning usually starts with a small amount of annotated data, a large amount of un-annotated data, and a small set of initial hypotheses or classifiers. With each iteration, more annotations are generated and stored until a stopping threshold is reached [13]. The term "semi-supervised" (or "weakly supervised") is relatively recent. The main technique for SSL is called "bootstrapping" and involves a small degree of supervision, such as a set of seeds, to start the learning process. For example, a system aimed at recognizing disease names might ask the user to provide a small number of example names, say five. The system then searches for sentences that contain these names and tries to identify contextual clues common to the examples. Next, it tries to find other instances of disease names that appear in similar contexts. The learning process is then reapplied to the newly found examples in order to discover new relevant contexts. By repeating this process, a large number of disease names and a large number of contexts are identified.
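To make the bootstrapping loop concrete, the following is a minimal sketch in Python. It assumes single-token names and one-word left/right contexts; the function name, the toy corpus, and the seed set are invented for illustration and do not reproduce the specific method of [13].

```python
from collections import Counter

def bootstrap_disease_names(corpus, seed_names, iterations=3, min_context_count=2):
    """Toy bootstrapping loop: seeds -> contexts -> new names -> repeat.

    corpus: list of tokenized sentences (lists of lowercase tokens).
    seed_names: initial single-token disease names provided by the user.
    """
    names = set(seed_names)
    for _ in range(iterations):
        # Step 1: collect (left word, right word) contexts around known names.
        contexts = Counter()
        for tokens in corpus:
            for i, tok in enumerate(tokens):
                if tok in names:
                    left = tokens[i - 1] if i > 0 else "<s>"
                    right = tokens[i + 1] if i + 1 < len(tokens) else "</s>"
                    contexts[(left, right)] += 1

        # Step 2: keep only contexts seen often enough to count as reliable clues.
        reliable = {c for c, n in contexts.items() if n >= min_context_count}

        # Step 3: any token appearing in a reliable context becomes a new candidate name.
        new_names = set()
        for tokens in corpus:
            for i, tok in enumerate(tokens):
                left = tokens[i - 1] if i > 0 else "<s>"
                right = tokens[i + 1] if i + 1 < len(tokens) else "</s>"
                if (left, right) in reliable and tok not in names:
                    new_names.add(tok)

        if not new_names:          # stop early when nothing new is found
            break
        names |= new_names         # reapply learning with the enlarged name set
    return names


# Toy usage with an invented corpus and seeds.
corpus = [s.lower().split() for s in [
    "patients with malaria were treated early",
    "patients with influenza were treated early",
    "patients with dengue were treated early",
    "the hospital admitted several cases last week",
]]
print(bootstrap_disease_names(corpus, {"malaria", "influenza"}))
# -> {'malaria', 'influenza', 'dengue'}
```

In this sketch the reliability filter and the early-stopping check stand in for the threshold mentioned above; a real system would score contexts and candidates more carefully to limit semantic drift.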