Stanford Introduces Its Human-Centered AI Initiative

A common goal for the brightest minds from Stanford and beyond: putting humanity at the center of AI.

— Fei-Fei Li & John Etchemendy [http://hai.stanford.edu]

Humanity: The Next Frontier in AI

We have arrived at a truly historic turning point: Society is being reshaped by technology faster and more profoundly than ever before. Many are calling it the fourth industrial revolution, driven by technologies ranging from 5G wireless to 3D printing to the Internet of Things. But increasingly, the most disruptive changes can be traced to the emergence of Artificial Intelligence.

Many of these changes are inspiring. Machine translation is making it easier for ideas to cross language barriers; computer vision is making medical diagnoses more accurate; and driver-assist features have made cars safer. Other changes are more worrisome: Millions face job insecurity as automation rapidly evolves; AI-generated content makes it increasingly difficult to tell fact from fiction; and recent examples of bias in machine learning have shown us how easily our technology can amplify prejudice and inequality.

Like any powerful tool, AI promises risk and reward in equal measure. But unlike most “dual-use” technologies, such as nuclear energy and biotech, the development and use of AI is a decentralized, global phenomenon with a relatively low barrier to entry. We can’t control something so diffuse, but there is much we can do to guide it responsibly. This is why the next frontier in AI cannot simply be technological—it must be humanistic as well.

The Stanford Human-Centered AI Initiative (HAI)

Many causes warrant our concern, from climate change to poverty, but there is something especially salient about AI: Although the full scope of its impact is a matter of uncertainty, it remains well within our collective power to shape it. That’s why Stanford University is announcing a major new initiative to create an institute dedicated to guiding the future of AI. It will support the necessary breadth of research across disciplines; foster a global dialogue among academia, industry, government, and civil society; and encourage responsible leadership in all sectors. We call this perspective Human-Centered AI, and it flows from three simple but powerful ideas:

  1. For AI to better serve our needs, it must incorporate more of the versatility, nuance, and depth of the human intellect.
  2. The development of AI should be paired with an ongoing study of its impact on human society, and guided accordingly.
  3. The ultimate purpose of AI should be to enhance our humanity, not diminish or replace it.

Realizing these goals will be among the greatest challenges of our time. Each presents complex technical problems and will provoke dialogue among engineers, social scientists, and humanists. This raises some important questions: What are the most pressing problems? Who will solve them? And where will these dialogues take place?

Human-Centered AI requires a broad effort that taps the expertise of an extraordinary range of disciplines, from neuroscience to ethics. Meeting this challenge will require us to take chances, exploring uncertain new terrain with no promise of a commercial product. It is far more than an engineering task.

The Essential Role of Academia

This is the domain of pure research. It’s the scientific freedom that allowed hundreds of universities to collaborate internationally to build the Large Hadron Collider—not to make our phones cheaper or our Wi-Fi faster, but to catch the first glimpse of the Higgs boson. It’s how we built the Hubble Telescope and mapped the human genome. Best of all, it’s inclusive; rather than compete for market share, it invites us to work together for the benefit of deeper understanding and knowledge that can be shared.

Even more important, academia is charged with educating the leaders and practitioners of tomorrow across a range of disciplines. The evolution of AI will be a multigenerational journey, and now is the time to instill human-centered values in the technologists, engineers, entrepreneurs, and policy makers who will chart its course in the years to come.

Why Stanford?

Realizing the goals of Human-Centered AI will require cooperation between academia, industry, and governments around the world. No single university will provide all the answers; no single company will define the standards; no single nation will control the technology.

Still, there is a need for a focal point, a center specifically devoted to the principles of Human-Centered AI, capable of rapidly advancing the research frontier and acting as a global clearinghouse for ideas from other universities, industries, and governments. We believe that Stanford is uniquely suited to play this role.

Stanford has been at the forefront of AI since John McCarthy founded the Stanford AI Lab (SAIL) in 1963. McCarthy coined the term “Artificial Intelligence” and set the agenda for much of the early work in the field. In the decades since, SAIL has served as the backdrop for many of AI’s greatest milestones, from pioneering work in expert systems to the first driverless car to navigate the 130-mile DARPA Grand Challenge. SAIL was the home of seminal work in computer vision and the birthplace of ImageNet, which demonstrated the transformative power of large-scale datasets on neural network algorithms. This tradition continues today, with active research by more than 100 doctoral students, as well as many master’s students and undergraduates. Research topics include computer vision, natural language processing, advanced robotics, and computational genomics.

But guiding the future of AI requires expertise far beyond engineering. In fact, the development of Human-Centered AI will draw on nearly every intellectual domain—and this is precisely what makes Stanford the ideal environment to enable it. The Stanford Law School, consistently regarded as one of the world’s most prestigious, brings top legal minds to the debate about the future of ethics and regulation in AI. Stanford’s social science and humanities departments, also among the strongest in the world, bring an understanding of the economic, sociological, political, and ethical implications of AI. Stanford’s Schools of Medicine, Education, and Business will help explore how intelligent machines can best serve the needs of patients, students, and industry. Stanford’s rich tradition of leadership across the disciplinary spectrum will allow us to chart the future of AI around human needs and interests.

Finally, Stanford’s location—both in the heart of Silicon Valley and on the Pacific Rim—places it in close proximity to many of the companies leading the commercial revolution in AI. With deeper roots in Silicon Valley than any other institution, Stanford can both learn from and share insights with the companies most capable of influencing that revolution.

With the Human-Centered AI Initiative, Stanford aspires to become home to a vibrant coalition of thinkers working together to make a greater impact than would be possible on their own. This effort will be organized around five interrelated goals:

  • Catalyze breakthrough, multidisciplinary research.
  • Foster a robust, global ecosystem.
  • Educate and train AI leaders in academia, industry, government, and civil society.
  • Promote real-world actions and policies.
  • And, perhaps most important, stimulate a global dialogue on Human-Centered AI.

In Closing

For decades AI was an academic niche. Then, over just a few years, it emerged as a powerful tool capable of reshaping entire industries. Now the time has come to transform it into something even greater: a force for good. With the right guidance, intelligent machines can bring life-saving diagnostics to the developing world, provide new educational opportunities in underserved communities, and even help us keep a more vigilant eye on the health of the environment. The Stanford Human-Centered AI Initiative is a large-scale effort to make these visions, and many more, a reality. We hope you’ll join us.

Visit http://hai.stanford.edu to find out more

Stanford’s Open Course on Natural Language Processing (NLP)

If you are interested in taking Stanford’s Open Course on Natural Language Processing (NLP), the full Coursera (coursera.org) course is available on YouTube as 101 video lessons.

The full Stanford NLP Open Course can be found via the following YouTube playlist: https://www.youtube.com/playlist?list=PL4LJlvG_SDpxQAwZYtwfXcQr7kGnl9W93

Presented by Professors Dan Jurafsky and Chris Manning (nlp-class.org), the course comprises the following 101 lessons, beginning with the Course Introduction (1 – 1):

1 – 1 – Course Introduction
2 – 1 – Regular Expressions
2 – 2 – Regular Expressions in Practical NLP
2 – 3 – Word Tokenization
2 – 4 – Word Normalization and Stemming
2 – 5 – Sentence Segmentation
3 – 1 – Defining Minimum Edit Distance
3 – 2 – Computing Minimum Edit Distance
3 – 3 – Backtrace for Computing Alignments
3 – 4 – Weighted Minimum Edit Distance
3 – 5 – Minimum Edit Distance in Computational Biology
4 – 1 – Introduction to N-grams
4 – 2 – Estimating N-gram Probabilities
4 – 3 – Evaluation and Perplexity
4 – 4 – Generalization and Zeros
4 – 5 – Smoothing: Add-One
4 – 6 – Interpolation
4 – 7 – Good-Turing Smoothing
4 – 8 – Kneser-Ney Smoothing
5 – 1 – The Spelling Correction Task
5 – 2 – The Noisy Channel Model of Spelling
5 – 3 – Real-Word Spelling Correction
5 – 4 – State of the Art Systems
6 – 1 – What is Text Classification?
6 – 2 – Naive Bayes
6 – 3 – Formalizing the Naive Bayes Classifier
6 – 4 – Naive Bayes: Learning
6 – 5 – Naive Bayes: Relationship to Language Modeling
6 – 6 – Multinomial Naive Bayes: A Worked Example
6 – 7 – Precision, Recall, and the F Measure
6 – 8 – Text Classification: Evaluation
6 – 9 – Practical Issues in Text Classification
7 – 1 – What is Sentiment Analysis?
7 – 2 – Sentiment Analysis: A Baseline Algorithm
7 – 3 – Sentiment Lexicons
7 – 4 – Learning Sentiment Lexicons
7 – 5 – Other Sentiment Tasks
8 – 1 – Generative vs. Discriminative Models
8 – 2 – Making Features from Text for Discriminative NLP Models
8 – 3 – Feature-Based Linear Classifiers
8 – 4 – Building a Maxent Model: The Nuts and Bolts
8 – 5 – Generative vs. Discriminative Models: The Problem of Overcounting Evidence
8 – 6 – Maximizing the Likelihood
9 – 1 – Introduction to Information Extraction
9 – 2 – Evaluation of Named Entity Recognition
9 – 3 – Sequence Models for Named Entity Recognition
9 – 4 – Maximum Entropy Sequence Models
10 – 1 – What is Relation Extraction?
10 – 2 – Using Patterns to Extract Relations
10 – 3 – Supervised Relation Extraction
10 – 4 – Semi-Supervised and Unsupervised Relation Extraction
11 – 1 – The Maximum Entropy Model Presentation
11 – 2 – Feature Overlap / Feature Interaction
11 – 3 – Conditional Maxent Models for Classification
11 – 4 – Smoothing / Regularization / Priors for Maxent Models
12 – 1 – An Intro to Parts of Speech and POS Tagging
12 – 2 – Some Methods and Results on Sequence Models for POS Tagging
13 – 1 – Syntactic Structure: Constituency vs. Dependency
13 – 2 – Empirical / Data-Driven Approach to Parsing
14 – 1 – Instructor Chat
15 – 1 – CFGs and PCFGs
15 – 2 – Grammar Transforms
15 – 3 – CKY Parsing
15 – 4 – CKY Example
15 – 5 – Constituency Parser Evaluation
16 – 1 – Lexicalization of PCFGs
16 – 2 – Charniak’s Model
16 – 3 – PCFG Independence Assumptions
16 – 4 – The Return of Unlexicalized PCFGs
16 – 5 – Latent Variable PCFGs
17 – 1 – Dependency Parsing Introduction
17 – 2 – Greedy Transition-Based Parsing
17 – 3 – Dependencies Encode Relational Structure
18 – 1 – Introduction to Information Retrieval
18 – 2 – Term-Document Incidence Matrices
18 – 3 – The Inverted Index
18 – 4 – Query Processing with the Inverted Index
18 – 5 – Phrase Queries and Positional Indexes
19 – 1 – Introducing Ranked Retrieval
19 – 2 – Scoring with the Jaccard Coefficient
19 – 3 – Term Frequency Weighting
19 – 4 – Inverse Document Frequency Weighting
19 – 5 – TF-IDF Weighting
19 – 6 – The Vector Space Model
19 – 7 – Calculating TF-IDF Cosine Scores
19 – 8 – Evaluating Search Engines
20 – 1 – Word Senses and Word Relations
20 – 2 – WordNet and Other Online Thesauri
20 – 3 – Word Similarity and Thesaurus Methods
20 – 4 – Word Similarity: Distributional Similarity I
20 – 5 – Word Similarity: Distributional Similarity II
21 – 1 – What is Question Answering?
21 – 2 – Answer Types and Query Formulation
21 – 3 – Passage Retrieval and Answer Extraction
21 – 4 – Using Knowledge in QA
21 – 5 – Advanced: Answering Complex Questions
22 – 1 – Introduction to Summarization
22 – 2 – Generating Snippets
22 – 3 – Evaluating Summaries: ROUGE
22 – 4 – Summarizing Multiple Documents
23 – 1 – Instructor Chat II

[Stanford NLP Open Course video playlist: https://www.youtube.com/playlist?list=PL4LJlvG_SDpxQAwZYtwfXcQr7kGnl9W93]