Machine Learning With Stanford University

Stanford University is running a Machine Learning course on coursera.org.

Machine learning is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress towards human-level AI. In this class, you will learn about the most effective machine learning techniques, and gain practice implementing them and getting them to work for yourself. More importantly, you’ll not only learn the theoretical underpinnings of learning, but also gain the practical know-how needed to quickly and effectively apply these techniques to new problems. Finally, you’ll learn about some of Silicon Valley’s best practices in innovation as they pertain to machine learning and AI.

This course provides a broad introduction to machine learning, data mining, and statistical pattern recognition. Topics include: (i) supervised learning (parametric/non-parametric algorithms, support vector machines, kernels, neural networks); (ii) unsupervised learning (clustering, dimensionality reduction, recommender systems, deep learning); (iii) best practices in machine learning (bias/variance theory; the innovation process in machine learning and AI). The course will also draw on numerous case studies and applications, so that you’ll learn how to apply learning algorithms to building smart robots (perception, control), text understanding (web search, anti-spam), computer vision, medical informatics, audio, database mining, and other areas.
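
As a small taste of the supervised learning material, here is a minimal sketch of univariate linear regression fitted by batch gradient descent, one of the first techniques such a course typically covers. It is written in Python purely for illustration; the toy data and hyperparameters are assumptions of mine, not taken from the course itself:

    def fit_linear(xs, ys, lr=0.01, epochs=2000):
        """Fit y = w * x + b by minimising mean squared error."""
        w, b = 0.0, 0.0
        n = len(xs)
        for _ in range(epochs):
            # Gradients of the mean squared error with respect to w and b.
            grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
            grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
            w -= lr * grad_w
            b -= lr * grad_b
        return w, b

    # Toy data scattered around y = 2x + 1 (invented for this example).
    xs = [0.0, 1.0, 2.0, 3.0, 4.0]
    ys = [1.1, 2.9, 5.2, 6.8, 9.1]
    w, b = fit_linear(xs, ys)
    print(f"w = {w:.2f}, b = {b:.2f}")  # close to w = 2, b = 1

The same recipe, a differentiable cost function plus gradient descent, underlies many of the parametric methods listed above.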

Can I earn a Course Certificate if I completed this course before certificates were available?
In order to verify learners’ identities and maintain academic integrity, learners who completed assignments or quizzes for Machine Learning prior to November 1st will need to redo and resubmit these assessments to earn a Course Certificate. To clarify, both quizzes and programming assignments need to be resubmitted. Though your deadlines may have technically passed, please be assured that you may resubmit both types of assessments at any time. We apologise for the inconvenience and appreciate your patience as we strive to ensure the integrity and value of our certificates.

Please note that, in order to earn a Course Certificate, you must complete the course within 180 days of payment, or by May 1, 2016, whichever is earlier.

Enrolment ends February 27

Stanford’s Open Course on Natural Language Processing (NLP)

If you are interested in doing Stanford’s Open Course on Natural Language Processing (NLP), Coursera (coursera.org) has made the full course available on YouTube as 101 video lessons.

The full Stanford NLP Open Course can be found via the following YouTube playlist: https://www.youtube.com/playlist?list=PL4LJlvG_SDpxQAwZYtwfXcQr7kGnl9W93

Presented by Professors Dan Jurafsky and Chris Manning (nlp-class.org), the Natural Language Processing (NLP) course comprises the following lessons, listed by chapter and lesson number. Each video title also carries a “Stanford NLP – Dan Jurafsky & Chris Manning” suffix, omitted below for readability; a short code sketch based on one of the lessons follows the playlist link after the list.

1 – 1 – Course Introduction
2 – 1 – Regular Expressions
2 – 2 – Regular Expressions in Practical NLP
2 – 3 – Word Tokenization
2 – 4 – Word Normalization and Stemming
2 – 5 – Sentence Segmentation
3 – 1 – Defining Minimum Edit Distance
3 – 2 – Computing Minimum Edit Distance
3 – 3 – Backtrace for Computing Alignments
3 – 4 – Weighted Minimum Edit Distance
3 – 5 – Minimum Edit Distance in Computational Biology
4 – 1 – Introduction to N-grams
4 – 2 – Estimating N-gram Probabilities
4 – 3 – Evaluation and Perplexity
4 – 4 – Generalization and Zeros
4 – 5 – Smoothing: Add-One
4 – 6 – Interpolation
4 – 7 – Good-Turing Smoothing
4 – 8 – Kneser-Ney Smoothing
5 – 1 – The Spelling Correction Task
5 – 2 – The Noisy Channel Model of Spelling
5 – 3 – Real-Word Spelling Correction
5 – 4 – State of the Art Systems
6 – 1 – What is Text Classification?
6 – 2 – Naive Bayes
6 – 3 – Formalizing the Naive Bayes Classifier
6 – 4 – Naive Bayes: Learning
6 – 5 – Naive Bayes: Relationship to Language Modeling
6 – 6 – Multinomial Naive Bayes: A Worked Example
6 – 7 – Precision, Recall, and the F Measure
6 – 8 – Text Classification: Evaluation
6 – 9 – Practical Issues in Text Classification
7 – 1 – What is Sentiment Analysis?
7 – 2 – Sentiment Analysis: A Baseline Algorithm
7 – 3 – Sentiment Lexicons
7 – 4 – Learning Sentiment Lexicons
7 – 5 – Other Sentiment Tasks
8 – 1 – Generative vs. Discriminative Models
8 – 2 – Making Features from Text for Discriminative NLP Models
8 – 3 – Feature-Based Linear Classifiers
8 – 4 – Building a Maxent Model: The Nuts and Bolts
8 – 5 – Generative vs. Discriminative Models: The Problem of Overcounting Evidence
8 – 6 – Maximizing the Likelihood
9 – 1 – Introduction to Information Extraction
9 – 2 – Evaluation of Named Entity Recognition
9 – 3 – Sequence Models for Named Entity Recognition
9 – 4 – Maximum Entropy Sequence Models
10 – 1 – What is Relation Extraction?
10 – 2 – Using Patterns to Extract Relations
10 – 3 – Supervised Relation Extraction
10 – 4 – Semi-Supervised and Unsupervised Relation Extraction
11 – 1 – The Maximum Entropy Model Presentation
11 – 2 – Feature Overlap / Feature Interaction
11 – 3 – Conditional Maxent Models for Classification
11 – 4 – Smoothing / Regularization / Priors for Maxent Models
12 – 1 – An Intro to Parts of Speech and POS Tagging
12 – 2 – Some Methods and Results on Sequence Models for POS Tagging
13 – 1 – Syntactic Structure: Constituency vs. Dependency
13 – 2 – Empirical / Data-Driven Approach to Parsing
14 – 1 – Instructor Chat
15 – 1 – CFGs and PCFGs
15 – 2 – Grammar Transforms
15 – 3 – CKY Parsing
15 – 4 – CKY Example
15 – 5 – Constituency Parser Evaluation
16 – 1 – Lexicalization of PCFGs
16 – 2 – Charniak’s Model
16 – 3 – PCFG Independence Assumptions
16 – 4 – The Return of Unlexicalized PCFGs
16 – 5 – Latent Variable PCFGs
17 – 1 – Dependency Parsing Introduction
17 – 2 – Greedy Transition-Based Parsing
17 – 3 – Dependencies Encode Relational Structure
18 – 1 – Introduction to Information Retrieval
18 – 2 – Term-Document Incidence Matrices
18 – 3 – The Inverted Index
18 – 4 – Query Processing with the Inverted Index
18 – 5 – Phrase Queries and Positional Indexes
19 – 1 – Introducing Ranked Retrieval
19 – 2 – Scoring with the Jaccard Coefficient
19 – 3 – Term Frequency Weighting
19 – 4 – Inverse Document Frequency Weighting
19 – 5 – TF-IDF Weighting
19 – 6 – The Vector Space Model
19 – 7 – Calculating TF-IDF Cosine Scores
19 – 8 – Evaluating Search Engines
20 – 1 – Word Senses and Word Relations
20 – 2 – WordNet and Other Online Thesauri
20 – 3 – Word Similarity and Thesaurus Methods
20 – 4 – Word Similarity: Distributional Similarity I
20 – 5 – Word Similarity: Distributional Similarity II
21 – 1 – What is Question Answering?
21 – 2 – Answer Types and Query Formulation
21 – 3 – Passage Retrieval and Answer Extraction
21 – 4 – Using Knowledge in QA
21 – 5 – Advanced: Answering Complex Questions
22 – 1 – Introduction to Summarization
22 – 2 – Generating Snippets
22 – 3 – Evaluating Summaries: ROUGE
22 – 4 – Summarizing Multiple Documents
23 – 1 – Instructor Chat II

[Stanford NLP Open Course video playlist: https://www.youtube.com/playlist?list=PL4LJlvG_SDpxQAwZYtwfXcQr7kGnl9W93]
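
Lessons 3 – 1 to 3 – 3 cover minimum edit distance, which is concrete enough to sketch here. Below is a minimal Python version of the standard dynamic-programming algorithm with unit insertion, deletion and substitution costs; the lectures also cover a weighted variant (for example, substitution costing 2), and the function and variable names here are my own, not taken from the course:

    def min_edit_distance(source, target):
        """Minimum number of insertions, deletions and substitutions
        needed to turn source into target."""
        m, n = len(source), len(target)
        # dist[i][j] = edit distance between source[:i] and target[:j].
        dist = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            dist[i][0] = i                      # delete all of source[:i]
        for j in range(n + 1):
            dist[0][j] = j                      # insert all of target[:j]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                sub = 0 if source[i - 1] == target[j - 1] else 1
                dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                                 dist[i][j - 1] + 1,        # insertion
                                 dist[i - 1][j - 1] + sub)  # substitution
        return dist[m][n]

    print(min_edit_distance("intention", "execution"))  # prints 5

Keeping the full table rather than just the final cell is what makes the backtrace of lesson 3 – 3 possible: following, from the bottom-right cell, the choices that produced each minimum recovers an alignment between the two strings.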

AI / Artificial Intelligence

Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also the name of the academic field that studies how to create computers and computer software capable of intelligent behaviour. Major AI researchers and textbooks define this field as “the study and design of intelligent agents”, where an intelligent agent is a system that perceives its environment and takes actions that maximise its chances of success. John McCarthy, who coined the term in 1955, defined it as “the science and engineering of making intelligent machines”.

AI research is highly technical and specialised, and is deeply divided into subfields that often fail to communicate with each other. Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers. AI research is also divided by several technical issues: some subfields focus on the solution of specific problems, while others focus on one of several possible approaches, on the use of a particular tool, or on particular applications.

The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception, and the ability to move and manipulate objects. General intelligence is still among the field’s long-term goals. Currently popular approaches include statistical methods, computational intelligence and traditional symbolic AI. A large number of tools are used in AI, including versions of search and mathematical optimisation, logic, and methods based on probability and economics, among many others. The AI field is interdisciplinary: a number of sciences and professions converge in it, including computer science, mathematics, psychology, linguistics, philosophy and neuroscience, as well as other specialised fields such as artificial psychology.
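
As a minimal illustration of the “search” tools mentioned above, here is a short Python sketch of breadth-first search finding a shortest path through a toy state graph; the graph itself is an invented example, not taken from the source:

    from collections import deque

    def bfs_path(graph, start, goal):
        """Return a shortest path from start to goal, or None if unreachable."""
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            node = path[-1]
            if node == goal:
                return path
            for neighbour in graph.get(node, []):
                if neighbour not in visited:
                    visited.add(neighbour)
                    frontier.append(path + [neighbour])
        return None

    graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["E"], "E": []}
    print(bfs_path(graph, "A", "E"))  # ['A', 'C', 'E']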

The field was founded on the claim that a central property of humans, human intelligence (the sapience of Homo sapiens), “can be so precisely described that a machine can be made to simulate it.” This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of tremendous optimism but has also suffered stunning setbacks. Today it has become an essential part of the technology industry, providing the heavy lifting for many of the most challenging problems in computer science.

[Snippet updated from wikipedia.org – 16th September 2015: https://en.wikipedia.org/wiki/Artificial_intelligence]