Introduction - Computer Science


COMP 875: Introductions
Name, year, research area/group
Why are you interested in machine learning, and how does it relate to your research?
What topics would you like to see covered in this course?

What is Machine Learning?
Using past experiences to improve future performance (on a particular task). For a machine, experiences come in the form of data.

What does it mean to improve performance?
Learning is guided by a quantitative objective, associated with a particular notion of loss to be minimized (or gain to be maximized).

Why machine learning?
Often it is too difficult to design a set of rules by hand. Machine learning is about automatically extracting relevant information from data and applying it to analyze new data.
Source: G. Shakhnarovich
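To make the idea of a quantitative objective concrete, here is a minimal sketch (mine, not the slides') of learning as loss minimization: fitting a line to toy data by gradient descent on the mean squared error. The data, step size, and iteration count are all invented for illustration.

```python
import numpy as np

# Toy data: y is roughly 2*x + 1 plus noise (invented for illustration).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2 * x + 1 + 0.1 * rng.normal(size=100)

# Hypothesis: y_hat = w*x + b. Objective: mean squared loss, to be minimized.
w, b = 0.0, 0.0
lr = 0.1  # step size
for _ in range(500):
    y_hat = w * x + b
    grad_w = np.mean(2 * (y_hat - y) * x)  # d(loss)/dw
    grad_b = np.mean(2 * (y_hat - y))      # d(loss)/db
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # close to 2 and 1: driving the loss down improved performance
```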

Machine Learning Steps
Data collection: start with training data for which we know the correct outcome, provided by a teacher.
Representation: decide how to encode the input to the learning program.
Modeling: choose a hypothesis class, a set of possible explanations for the data.
Estimation: find the best hypothesis you can in the chosen class.
Model selection: we may reconsider the class of hypotheses given the outcome.
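The five steps, sketched end to end with scikit-learn; the synthetic dataset, the linear model family, and the hyperparameter grid are stand-ins chosen for illustration, not part of the lecture.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LogisticRegression

# 1. Data collection: labeled examples (synthetic stand-ins here).
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2. Representation: inputs are already fixed-length feature vectors.

# 3. Modeling: choose a hypothesis class (linear classifiers).
model = LogisticRegression(max_iter=1000)

# 4. Estimation: find the best hypothesis in the class on the training data.
# 5. Model selection: compare variants of the class (here, regularization
#    strengths C) by cross-validation and keep the best one.
search = GridSearchCV(model, {"C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```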

Each of these steps can make or break the learning outcome.
Source: G. Shakhnarovich

Learning and Probability
There are many sources of uncertainty with which learning algorithms must cope:
Variability of the data
Dataset collection
Measurement noise
Labeling errors

Probability and statistics provide an appropriate framework to deal with uncertainty.
Some basic statistical assumptions:
Training data is sampled from the true underlying data distribution.
Future test data will be sampled from the same distribution.
Source: G. Shakhnarovich
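A tiny numpy illustration (mine, not the lecture's) of why these assumptions matter: because train and test are drawn from the same distribution, a quantity estimated on the training sample transfers to the test sample.

```python
import numpy as np

rng = np.random.default_rng(1)
# Both samples come from the same underlying distribution, here N(3, 1).
train = rng.normal(loc=3.0, scale=1.0, size=1000)
test = rng.normal(loc=3.0, scale=1.0, size=1000)

mean_hat = train.mean()  # estimated from training data only
print(mean_hat, np.mean((test - mean_hat) ** 2))  # ~3.0, and test MSE ~1.0
```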

Example of a learning problem
Given: training images and their categories. What are the categories of these test images?
Possible representation: an image of n × n pixels becomes a vector of length n² (or 3n² if color).
Source: G. Shakhnarovich
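In code, this representation is just a reshape; the arrays below are placeholders standing in for real images.

```python
import numpy as np

n = 32
gray = np.zeros((n, n))       # placeholder n x n grayscale image
color = np.zeros((n, n, 3))   # placeholder n x n color image

print(gray.reshape(-1).shape)   # (1024,) = n^2
print(color.reshape(-1).shape)  # (3072,) = 3n^2
```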

The Importance of Representation
Dimensionality
Beyond vectors: complex or heterogeneous input objects
Web pages
Program traces
Images with captions or metadata
Video with sound
Proteins
Feature extraction and feature selection: what measurements/information about the input objects are the most useful for solving the given problem?
Successful representation requires domain knowledge! If we could find the ideal feature representation, we would not even need learning!

Types of learning problems
Supervised: classification, regression
Unsupervised
Semi-supervised
Reinforcement learning
Active learning

Supervised learning
Given training examples of inputs and corresponding outputs, produce the correct outputs for new inputs.
Two main scenarios:
Classification: outputs are discrete variables (category labels). Learn a decision boundary that separates one class from the other.
Regression: also known as curve fitting or function approximation. Learn a continuous input-output mapping from (possibly noisy) examples.

Regression: example 1
Suppose we want to predict the gas mileage of a car based on some characteristics: number of cylinders or doors, weight, horsepower, year, etc.
Source: G. Shakhnarovich

Regression: example 2
Training set: faces (represented as vectors of distances between keypoints) together with experimentally obtained attractiveness rankings.
Learn: a function to reproduce the attractiveness ranking based on the training inputs and outputs, i.e., an attractiveness score f(v) for a vector of distances v.
T. Leyvand, D. Cohen-Or, G. Dror, and D. Lischinski, Data-driven enhancement of facial attractiveness, SIGGRAPH 2008
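A minimal sketch of example 1 with scikit-learn; the feature values and mileages below are invented, not real car data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented stand-in data: [weight (1000 lbs), horsepower] -> mpg.
X = np.array([[3.5, 150], [2.2, 90], [4.1, 200], [2.8, 110], [3.0, 130]])
y = np.array([18.0, 32.0, 14.0, 27.0, 24.0])

reg = LinearRegression().fit(X, y)   # learn the input-output mapping
print(reg.predict([[3.2, 140]]))     # predicted mpg for an unseen car
```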

Regression: example 3
Input: scalar (attractiveness score). Output: vector-valued object (face).
B. Davis and S. Lazebnik, Analysis of Human Attractiveness Using Manifold Kernel Regression, ICIP 2008

Regression: example 4
Input: scalar (age). Output: vector-valued object (3D brain image).

B. C. Davis, P. T. Fletcher, E. Bullitt, and S. Joshi, Population Shape Regression From Random Design Data, ICCV 2007

Structured Prediction
Examples (Source: B. Taskar):
Image → word

Sentence → parse tree
Sentence in two languages → word alignment
Amino-acid sequence → bond structure

Many image-based inference tasks (e.g., the data association problem) can loosely be thought of as structured prediction.
Source: D. Ramanan

Other supervised learning scenarios
Learning similarity functions from relations between multiple input objects:
Pairwise constraints
Triplet constraints
Source: X. Sui, K. Grauman
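One way to make the constraints concrete: a triplet (a, p, n) demands that a be closer to p than to n under the learned representation. The small checker below is my own sketch; the embedding, triplet, and margin are invented for illustration.

```python
import numpy as np

def triplet_violations(emb, triplets, margin=1.0):
    """Count triplets (a, p, n) violating d(a, p) + margin <= d(a, n)."""
    bad = 0
    for a, p, n in triplets:
        d_ap = np.linalg.norm(emb[a] - emb[p])
        d_an = np.linalg.norm(emb[a] - emb[n])
        if d_ap + margin > d_an:
            bad += 1
    return bad

emb = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0]])  # toy 2-D embedding
print(triplet_violations(emb, [(0, 1, 2)]))           # 0: constraint holds
```

A metric-learning method would then adjust the embedding (or the distance function) to reduce this violation count.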

Unsupervised Learning
Given only unlabeled data as input, learn some sort of structure. The objective is often more vague or subjective than in supervised learning; this is more of an exploratory/descriptive data analysis.

Unsupervised Learning: Clustering
Discover groups of similar data points.

Unsupervised Learning: Quantization
Map a continuous input to a discrete (more compact) output.
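k-means illustrates both ideas at once: the cluster labels group similar points, and mapping each point to its nearest center is exactly a quantization. The blob data below is synthetic and invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Unlabeled points from three invented blobs.
X = np.vstack([rng.normal(c, 0.3, size=(50, 2))
               for c in ([0, 0], [3, 3], [0, 3])])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.labels_[:10])       # clustering: a group index per point
print(km.cluster_centers_)   # quantization: each point -> nearest center
```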

Unsupervised Learning: Dimensionality reduction, manifold learning
Discover a lower-dimensional surface on which the data lives.
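A quick PCA sketch (my illustration, with invented data): 3-D points that actually lie near a 2-D plane are mapped to 2-D coordinates with almost no loss.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
basis = rng.normal(size=(2, 3))   # invented 2-D subspace inside 3-D
X = rng.normal(size=(200, 2)) @ basis + 0.01 * rng.normal(size=(200, 3))

pca = PCA(n_components=2).fit(X)
Z = pca.transform(X)              # lower-dimensional coordinates
print(Z.shape, pca.explained_variance_ratio_.sum())  # (200, 2), ~1.0
```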

Unsupervised Learning: Density estimation
Find a function that approximates the probability density of the data (i.e., the value of the function is high for typical points and low for atypical points). Can be used for anomaly detection.
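A sketch of density estimation for anomaly detection with a kernel density estimator; the data and bandwidth are invented for illustration.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
X = rng.normal(0, 1, size=(500, 1))        # typical points

kde = KernelDensity(bandwidth=0.5).fit(X)
print(kde.score_samples([[0.0], [8.0]]))   # log-density: high at the typical
                                           # point 0.0, very low at the outlier 8.0
```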

Other types of learning
Semi-supervised learning: lots of data is available, but only a small portion is labeled (e.g., since labeling is expensive). Why is learning from labeled and unlabeled data better than learning from labeled data alone?

Active learning: the learning algorithm can choose its own training examples, or ask a teacher for an answer on selected inputs.
S. Vijayanarasimhan and K. Grauman, Cost-Sensitive Active Visual Category Learning, 2009
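A common concrete instance of active learning (my sketch, not the cited paper's method) is uncertainty sampling: query the unlabeled example the current classifier is least sure about.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=0)
labeled = np.arange(10)                 # pretend only 10 labels are known

clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
unlabeled = np.setdiff1d(np.arange(len(X)), labeled)
proba = clf.predict_proba(X[unlabeled])[:, 1]

query = unlabeled[np.argmin(np.abs(proba - 0.5))]  # most uncertain example
print(query)  # the input to ask the teacher to label next
```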

Reinforcement learning: an agent takes inputs from the environment and takes actions that affect the environment. Occasionally, the agent gets a scalar reward or punishment. The goal is to learn to produce action sequences that maximize the expected reward (e.g., driving a robot without bumping into obstacles).
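The full agent-environment loop is beyond a short snippet, but a two-armed bandit (my sketch; the payoffs and exploration rate are invented) shows the core of learning to maximize expected reward from scalar feedback.

```python
import numpy as np

rng = np.random.default_rng(0)
true_reward = [0.3, 0.7]   # hidden expected reward of each action
q = np.zeros(2)            # agent's running reward estimates
counts = np.zeros(2)

for _ in range(1000):
    # Epsilon-greedy: mostly exploit the best-looking action, sometimes explore.
    a = rng.integers(2) if rng.random() < 0.1 else int(np.argmax(q))
    r = float(rng.random() < true_reward[a])   # scalar reward: 0 or 1
    counts[a] += 1
    q[a] += (r - q[a]) / counts[a]             # incremental average

print(q)  # estimates approach [0.3, 0.7]; the agent learns to prefer action 1
```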

Apprenticeship learning: learning from demonstrations when the reward function is initially unknown. Example: autonomous helicopter flight (Pieter Abbeel, http://heli.stanford.edu/).

Generalization
The ultimate goal is to do as well as possible on new, unseen data (a test set), but we only have access to labels (ground truth) for the training set.

What makes generalization possible?
Inductive bias: the set of assumptions a learner uses to predict the target value for previously unseen inputs. This is the same as modeling or choosing a target hypothesis class.
Types of inductive bias:
Occam's razor
Similarity/continuity bias: similar inputs should have similar outputs

Achieving good generalization
Consideration 1: bias. How well does your model fit the observed data? It may be a good idea to accept some fitting error, because it may be due to noise or other accidental characteristics of one particular training set.
Consideration 2: variance. How robust is the model to the selection of a particular training set? To put it differently, if we learn models on two different training sets, how consistent will the models be?

Bias/variance tradeoff
Models with too many parameters may fit the training data well (low bias), but are sensitive to the choice of training set (high variance); generalization error is due to overfitting.
Models with too few parameters may not fit the training data well (high bias), but are consistent across different training sets (low variance); generalization error is due to underfitting.

Underfitting and overfitting
How to recognize underfitting? High training error and high test error.
How to deal with underfitting? Find a more complex model.
How to recognize overfitting? Low training error, but high test error.
How to deal with overfitting? Get more training data.

Decrease the number of parameters in your model.
Regularization: penalize certain parts of the parameter space or introduce additional constraints to deal with a potentially ill-posed problem.
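The whole under/overfitting story can be seen in a few lines by fitting polynomials of different degrees to noisy data; the degrees and the ridge penalty below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(40, 1))
y = np.sin(3 * x).ravel() + 0.2 * rng.normal(size=40)
x_tr, x_te, y_tr, y_te = train_test_split(x, y, random_state=0)

for name, model in [
    ("degree 1 (underfit)", make_pipeline(PolynomialFeatures(1), LinearRegression())),
    ("degree 15 (overfit)", make_pipeline(PolynomialFeatures(15), LinearRegression())),
    ("degree 15 + ridge", make_pipeline(PolynomialFeatures(15), Ridge(alpha=1.0))),
]:
    model.fit(x_tr, y_tr)
    # Low train and test scores = underfitting;
    # high train score but low test score = overfitting.
    print(name, model.score(x_tr, y_tr), model.score(x_te, y_te))
```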

Methodology
The distinction between training and testing is crucial: correct performance on the training set is just memorization! Strictly speaking, the researcher should never look at the test data when designing the system. Generalization performance should be evaluated on a hold-out or validation set. This raises some troubling issues for learning benchmarks.
Source: R. Parr
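In practice, this discipline is often implemented as a three-way split; the 60/20/20 proportions below are an arbitrary illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split

X, y = np.arange(100).reshape(-1, 1), np.arange(100)

# Lock the test set away first; design decisions use train + validation only.
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=0)
print(len(X_train), len(X_val), len(X_test))  # 60 20 20
```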

Next time
The math begins. Guest lecturer: Max Raginsky (Duke EE).
Reading lists are due to me by email by the end of next Thursday, September 3rd:
A couple of sentences describing your topic
A list of ~3 papers (doesn't have to be final)
Date constraints/preferences
If you have more than one idea, send them all (it will help with conflict resolution).
