Workshop on Machine Learning
Machine learning is a branch of artificial intelligence (AI) that gives computers the ability to learn without being explicitly programmed. It centers on the development of computer programs that can adapt when exposed to new data.
The process of machine learning is similar to that of data mining: both search through data to look for patterns. However, instead of extracting data for human comprehension, as is the case in data mining applications, machine learning uses that data to detect patterns and adjust program behavior accordingly.
Facebook's News Feed, for instance, uses machine learning to personalize each member's feed.
Topics Covered in our Workshop:
- Definition of learning systems. Goals and applications of machine learning. Aspects of developing a learning system: training data, concept representation, function approximation.
- The concept learning task. Concept learning as search through a hypothesis space. General-to-specific ordering of hypotheses.
Decision Tree Learning
- Representing concepts as decision trees. Recursive induction of decision trees. Picking the best splitting attribute: entropy and information gain. Searching for simple trees and computational complexity. Occam's razor. Overfitting, noisy data, and pruning.
Ensemble Learning
- Using committees of multiple hypotheses. Bagging, boosting, and DECORATE. Active learning with ensembles.
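The entropy and information-gain computations used to pick the best splitting attribute can be sketched in a few lines of Python (the toy "wind"/"play" data below is illustrative, not from the workshop materials):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attribute, label):
    """Expected reduction in entropy from splitting `examples` on `attribute`.

    `examples` is a list of dicts; `attribute` and `label` are dict keys.
    """
    total = entropy([e[label] for e in examples])
    n = len(examples)
    for value in {e[attribute] for e in examples}:
        subset = [e[label] for e in examples if e[attribute] == value]
        total -= len(subset) / n * entropy(subset)
    return total

# Toy data: "wind" predicts "play" perfectly, so splitting on it
# removes all uncertainty and the gain is a full bit.
data = [
    {"wind": "strong", "play": "no"},
    {"wind": "strong", "play": "no"},
    {"wind": "weak", "play": "yes"},
    {"wind": "weak", "play": "yes"},
]
print(information_gain(data, "wind", "play"))  # 1.0
```

Recursive decision-tree induction repeatedly applies `information_gain` to choose the split at each node, stopping when a subset is pure.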
Experimental Evaluation of Learning Algorithms
- Measuring the accuracy of learned hypotheses. Comparing learning algorithms: cross-validation, learning curves, and statistical hypothesis testing.
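k-fold cross-validation, as covered above, can be sketched generically; the majority-class "learner" below is a deliberately trivial stand-in so the example stays self-contained:

```python
import random
from collections import Counter

def cross_validation(examples, k, train_fn, accuracy_fn, seed=0):
    """Mean held-out accuracy over k folds.

    `train_fn(train)` returns a hypothesis; `accuracy_fn(h, test)` scores it
    on data the hypothesis never saw during training.
    """
    data = examples[:]
    random.Random(seed).shuffle(data)
    folds = [data[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        test = folds[i]
        train = [e for j, fold in enumerate(folds) if j != i for e in fold]
        scores.append(accuracy_fn(train_fn(train), test))
    return sum(scores) / k

# Toy task and a weak baseline learner that always predicts the majority class.
data = [(x, x >= 5) for x in range(20)]

def train(train_set):
    majority = Counter(label for _, label in train_set).most_common(1)[0][0]
    return lambda x: majority

def accuracy(h, test_set):
    return sum(h(x) == y for x, y in test_set) / len(test_set)

print(cross_validation(data, 5, train, accuracy))
```

Running the same procedure for two learners on the same folds is the usual setup for the statistical hypothesis tests mentioned above.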
Computational Learning Theory
- Models of learnability: learning in the limit; probably approximately correct (PAC) learning. Sample complexity: quantifying the number of examples needed to PAC learn.
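One standard sample-complexity bound, for a consistent learner over a finite hypothesis space H, is m >= (1/epsilon)(ln|H| + ln(1/delta)). A small sketch (the conjunction example is illustrative):

```python
import math

def pac_sample_bound(hypothesis_space_size, epsilon, delta):
    """Number of examples sufficient for a consistent learner to be,
    with probability at least 1 - delta, approximately (error <= epsilon)
    correct:  m >= (1/epsilon) * (ln|H| + ln(1/delta)).
    """
    return math.ceil(
        (math.log(hypothesis_space_size) + math.log(1 / delta)) / epsilon
    )

# Conjunctions over 10 boolean features: each feature appears positively,
# negatively, or not at all, so |H| = 3**10.
print(pac_sample_bound(3 ** 10, epsilon=0.1, delta=0.05))  # 140
```

The bound grows only logarithmically in |H|, which is why even very large hypothesis spaces can be PAC-learnable from modest samples.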
Rule Learning: Propositional and First-Order
- Translating decision trees into rules. Heuristic rule induction using separate-and-conquer and information gain.
Artificial Neural Networks
- Neurons and biological motivation. Linear threshold units. Perceptrons: representational limitation and gradient descent training.
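The perceptron training rule for a linear threshold unit fits in a few lines; the AND data below is an illustrative linearly separable task:

```python
def train_perceptron(examples, epochs=20, lr=1.0):
    """Perceptron rule for a linear threshold unit.

    `examples` is a list of (features, label) pairs with label in {-1, +1}.
    Returns (weights, bias). Converges only if the data is linearly
    separable -- the representational limitation noted above.
    """
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in examples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:  # misclassified: nudge the boundary
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Logical AND is linearly separable, so the perceptron learns it exactly.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), +1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

XOR, by contrast, has no linear separator, which motivates the kernel methods and multi-layer networks discussed later.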
Support Vector Machines
- Maximum margin linear separators. Quadratic programming solution to finding maximum margin separators. Kernels for learning non-linear functions.
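The quadratic-programming step of SVM training is too involved to sketch here, but the kernel idea itself can be shown with a kernel (dual-form) perceptron, a simpler algorithm that uses kernels the same way. XOR is the classic non-linearly-separable example:

```python
def poly_kernel(x, z, degree=2):
    """(1 + x.z)^d implicitly maps inputs to all monomials up to degree d."""
    return (1 + sum(a * b for a, b in zip(x, z))) ** degree

def kernel_perceptron(examples, kernel, epochs=20):
    """Dual-form perceptron: the hypothesis is a kernel-weighted vote of
    training examples, so only kernel values are ever computed."""
    alpha = [0] * len(examples)
    for _ in range(epochs):
        for i, (x, y) in enumerate(examples):
            s = sum(a * yj * kernel(xj, x)
                    for a, (xj, yj) in zip(alpha, examples) if a)
            if y * s <= 0:
                alpha[i] += 1
    def predict(x):
        s = sum(a * yj * kernel(xj, x)
                for a, (xj, yj) in zip(alpha, examples) if a)
        return 1 if s > 0 else -1
    return predict

# XOR has no linear separator, but a degree-2 kernel separates it.
xor = [((0, 0), -1), ((0, 1), +1), ((1, 0), +1), ((1, 1), -1)]
predict = kernel_perceptron(xor, poly_kernel)
```

An SVM replaces the mistake-driven updates with a quadratic program that maximizes the margin, but it plugs in kernels at exactly the same point.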
Bayesian Learning
- Probability theory and Bayes rule. Naive Bayes learning algorithm.
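A minimal Naive Bayes text classifier with add-one (Laplace) smoothing; the spam/ham tokens are toy data chosen for illustration:

```python
import math
from collections import Counter

def train_naive_bayes(examples):
    """Naive Bayes over token features, with add-one smoothing.

    `examples` is a list of (tokens, label) pairs. Returns a classifier
    that picks the label maximizing log P(label) + sum log P(token|label).
    """
    label_counts = Counter(label for _, label in examples)
    word_counts = {label: Counter() for label in label_counts}
    vocab = set()
    for tokens, label in examples:
        word_counts[label].update(tokens)
        vocab.update(tokens)

    def classify(tokens):
        def log_score(label):
            total = sum(word_counts[label].values())
            score = math.log(label_counts[label] / len(examples))
            for t in tokens:
                score += math.log(
                    (word_counts[label][t] + 1) / (total + len(vocab)))
            return score
        return max(label_counts, key=log_score)
    return classify

classify = train_naive_bayes([
    (["free", "prize", "win"], "spam"),
    (["win", "cash", "now"], "spam"),
    (["meeting", "agenda", "notes"], "ham"),
    (["project", "notes", "today"], "ham"),
])
print(classify(["win", "prize"]))  # spam
```

The conditional-independence ("naive") assumption is rarely true, yet the classifier is a surprisingly strong baseline for text.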
Instance-Based Learning
- Constructing explicit generalizations versus comparing to past specific examples. K-nearest-neighbor algorithm. Case-based learning.
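K-nearest-neighbor is the purest example of comparing to past specific examples rather than building an explicit generalization; a sketch on toy 2-D points:

```python
import math
from collections import Counter

def knn_classify(examples, query, k=3):
    """Predict the majority label among the k training examples
    nearest to `query` (Euclidean distance)."""
    neighbors = sorted(examples, key=lambda e: math.dist(e[0], query))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

# Two well-separated clusters of toy points.
points = [((1, 1), "a"), ((1, 2), "a"), ((2, 1), "a"),
          ((8, 8), "b"), ((8, 9), "b"), ((9, 8), "b")]
print(knn_classify(points, (2, 2)))  # a
```

All the work happens at query time ("lazy" learning): training is just storing the examples, which is why case-based methods pair naturally with fast nearest-neighbor indexes.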
Text Categorization
- Bag of words representation. Vector space model and cosine similarity. Relevance feedback and Rocchio algorithm.
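Bag-of-words vectors and cosine similarity take only a few lines (raw term frequencies here; real systems usually add TF-IDF weighting):

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine of the angle between the term-frequency vectors of two
    whitespace-tokenized documents. 1.0 means identical direction."""
    a = Counter(doc_a.lower().split())
    b = Counter(doc_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

print(cosine_similarity("machine learning workshop",
                        "workshop on machine learning"))
```

Because cosine ignores vector length, a long and a short document about the same topic still score as similar; Rocchio relevance feedback works in this same vector space by moving the query toward relevant documents.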
Clustering and Unsupervised Learning
- Learning from unclassified data. Clustering. Hierarchical Agglomerative Clustering. K-means partitional clustering.
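K-means (Lloyd's algorithm) alternates between two steps until the centroids stop moving; a sketch on two obvious toy clusters:

```python
import math
import random

def k_means(points, k, iterations=100, seed=0):
    """Lloyd's algorithm: repeatedly assign each point to its nearest
    centroid, then move each centroid to the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(coord) / len(coord) for coord in zip(*cluster))
            if cluster else centroids[i]
            for i, cluster in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated toy clusters around (1/3, 1/3) and (28/3, 28/3).
points = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centroids, clusters = k_means(points, 2)
```

Each iteration can only decrease the total within-cluster squared distance, so the procedure always converges, though possibly to a local optimum that depends on the random initialization.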
Natural Language Learning
- Classification problems in language: word-sense disambiguation, sequence labeling. Hidden Markov models (HMMs).
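For sequence labeling with an HMM, the Viterbi algorithm recovers the most likely hidden-state sequence. The weather/activity model below is the standard toy illustration, not workshop data:

```python
def viterbi(observations, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an HMM, by dynamic programming."""
    # best[t][s] = probability of the best path ending in state s at time t
    best = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
    back = [{}]
    for t in range(1, len(observations)):
        best.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda r: best[t - 1][r] * trans_p[r][s])
            best[t][s] = (best[t - 1][prev] * trans_p[prev][s]
                          * emit_p[s][observations[t]])
            back[t][s] = prev
    # Trace the best path backwards from the most probable final state.
    last = max(states, key=lambda s: best[-1][s])
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
print(viterbi(["walk", "shop", "clean"], states, start_p, trans_p, emit_p))
```

The same recurrence, with words as observations and part-of-speech tags as hidden states, is the classic HMM approach to sequence labeling in language.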
Eligibility Criteria:
Since this is a basic-level workshop, no specific eligibility criteria are defined; anyone interested in machine learning, or looking to build a career in it, is welcome to attend.
The workshop will run for two consecutive days, with an eight-hour session each day. Each day is divided into theory and hands-on practical sessions.