edX MOOC-Calculus Applied; HarvardX

Apply tools of single-variable calculus to create and analyze mathematical models used by real practitioners in social, life, and physical sciences.

Syllabus

  • Introduction Section (Section 0)
  • Section 1: What Makes a Good Test Question? Mathematical Models to Measure Knowledge and Improve Learning
  • Section 2: Economic Applications of Calculus: Price and Demand in a Tale of Two Cities
  • Section 3: From X-Rays to CT-Scans: Mathematics and Medical Imaging
  • Section 4: What is Middle Income? Thinking about Income Distributions with Statistics and Calculus
  • Section 5: Population Dynamics Part I: The Evolution of Population Models
  • Section 6: Population Dynamics II: A Biological Puzzle OR How Fishing Affects a Predator-Prey System
  • Section 7: Extinction, Chaos and other Bifurcation Behavior
  • Section 8: Outbreak! Budworm Populations and Bifurcations
  • Section 9: Species in Competition: Coexistence or Exclusion
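
The later sections revolve around differential-equation population models. Purely as an illustration (not course material), the Python sketch below integrates the classic Lotka-Volterra predator-prey system with SciPy; all parameter values and initial populations are invented.

```python
# A minimal sketch of the kind of model discussed in Sections 5-6:
# the Lotka-Volterra predator-prey equations, integrated numerically.
# Parameter values and initial populations are made up for illustration.
import numpy as np
from scipy.integrate import odeint

def predator_prey(state, t, a=1.0, b=0.1, c=1.5, d=0.075):
    """dx/dt = a*x - b*x*y (prey), dy/dt = -c*y + d*x*y (predator)."""
    x, y = state
    return [a * x - b * x * y, -c * y + d * x * y]

t = np.linspace(0, 30, 600)                          # time grid
trajectory = odeint(predator_prey, [10.0, 5.0], t)   # initial prey/predator populations
print(trajectory[-1])                                # populations at the final time
```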

Register (free): link

———–&&&———-

DataCamp Course-Quantitative Risk Management in R; Alexander J. McNeil

Quantitative Risk Management in R

Course Description

In Quantitative Risk Management (QRM), you will build models to understand the risks of financial portfolios. This is a vital task across the banking, insurance, and asset management industries. The first step in the model-building process is to collect data on the underlying risk factors that affect portfolio value and analyze their behavior. In this course, you will learn how to work with risk-factor return series, study the empirical properties, or so-called “stylized facts”, of these data – including their typical non-normality and volatility – and make estimates of value-at-risk for a portfolio.

CHAPTERS:

  1. Exploring market risk-factor data

  2. Real world returns are riskier than normal

  3. Real world returns are volatile and correlated

  4. Estimating portfolio value-at-risk (VaR):

  • Value-at-risk and expected shortfall
  • Computing VaR and ES for normal distribution
  • International equity portfolio
  • Examining risk factors for international equity portfolio
  • Historical simulation
  • Estimating VaR and ES
  • Option portfolio and Black Scholes
  • Compute Black-Scholes price of an option
  • Equity and implied volatility risk factors
  • Historical simulation for the option example
  • Historical simulation of losses for option portfolio
  • Estimating VaR and ES for option portfolio
  • Computing VaR for weekly losses
  • Wrap-up
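
The course itself is taught in R; purely as an illustration of two of the Chapter 4 ideas above - the parametric (normal) VaR/ES formulas and historical simulation - here is a minimal Python sketch on made-up return data, not the course's datasets.

```python
# Illustrative only: normal (variance-covariance) VaR/ES and historical
# simulation at the 99% level, computed on synthetic daily returns.
import numpy as np
from scipy import stats

alpha = 0.99
returns = np.random.default_rng(1).normal(0.0005, 0.01, 1000)  # fake daily returns
losses = -returns                                              # losses are negated returns

# Parametric (normal) estimates
mu, sigma = losses.mean(), losses.std(ddof=1)
z = stats.norm.ppf(alpha)
var_normal = mu + sigma * z
es_normal = mu + sigma * stats.norm.pdf(z) / (1 - alpha)

# Historical-simulation estimates: empirical quantile and tail average
var_hs = np.quantile(losses, alpha)
es_hs = losses[losses > var_hs].mean()

print(var_normal, es_normal, var_hs, es_hs)
```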

******************

Register (free): link

———————–&&&——————–

FutureLearn MOOC-Big Data: Mathematical Modelling; Queensland University of Technology

Learn how to apply selected mathematical modelling methods to analyse big data in this free online course.

Learn how mathematics underpins big data analysis and develop your skills.

Mathematics is everywhere, and with the rise of big data it becomes a useful tool when extracting information and analysing large datasets. We begin by explaining how maths underpins many of the tools that are used to manage and analyse big data. We show how very different applied problems can have common mathematical aims, and therefore can be addressed using similar mathematical tools. We then introduce three such tools, based on a linear algebra framework: eigenvalues and eigenvectors for ranking; graph Laplacian for clustering; and singular value decomposition for data compression.

What topics will you cover?

  • Introduction to key mathematical concepts in big data analytics: eigenvalues and eigenvectors, principal component analysis (PCA), the graph Laplacian, and singular value decomposition (SVD)
  • Application of eigenvalues and eigenvectors to investigate prototypical problems of ranking big data
  • Application of the graph Laplacian to investigate prototypical problems of clustering big data
  • Application of PCA and SVD to investigate prototypical problems of big data compression
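
As a rough sketch of two of the tools named above (not course material), the NumPy snippet below runs a power iteration to find a dominant eigenvector for ranking, and a truncated SVD for compression; the matrices are invented toy data.

```python
# Illustrative only: eigenvector-based ranking via power iteration,
# and low-rank compression via a truncated SVD, on toy matrices.
import numpy as np

# Ranking: dominant eigenvector of a column-stochastic link matrix
A = np.array([[0.0, 0.5, 0.3],
              [0.5, 0.0, 0.7],
              [0.5, 0.5, 0.0]])
v = np.ones(3) / 3
for _ in range(100):
    v = A @ v
    v /= np.linalg.norm(v, 1)          # keep the scores summing to 1
print("ranking scores:", v)

# Compression: keep only the top-k singular values of a data matrix
X = np.random.default_rng(0).normal(size=(100, 20))
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 5
X_k = (U[:, :k] * s[:k]) @ Vt[:k, :]   # rank-k approximation
print("relative error:", np.linalg.norm(X - X_k) / np.linalg.norm(X))
```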

*********************

Register (free): link

————————–&&&————————–

Coursera MOOC-Machine Learning Foundations: A Case Study Approach; University of Washington [with Python]

About this course: Do you have data and wonder what it can tell you? Do you need a deeper understanding of the core ways in which machine learning can improve your business? Do you want to be able to converse with specialists about anything from regression and classification to deep learning and recommender systems?

In this course, you will get hands-on experience with machine learning from a series of practical case-studies. At the end of the first course you will have studied how to predict house prices based on house-level features, analyze sentiment from user reviews, retrieve documents of interest, recommend products, and search for images. Through hands-on practice with these use cases, you will be able to apply machine learning methods in a wide range of domains.

This first course treats the machine learning method as a black box. Using this abstraction, you will focus on understanding tasks of interest, matching these tasks to machine learning tools, and assessing the quality of the output. In subsequent courses, you will delve into the components of this black box by examining models and algorithms. Together, these pieces form the machine learning pipeline, which you will use in developing intelligent applications.

Learning Outcomes: By the end of this course, you will be able to:

  • Identify potential applications of machine learning in practice.
  • Describe the core differences in analyses enabled by regression, classification, and clustering.
  • Select the appropriate machine learning task for a potential application.
  • Apply regression, classification, clustering, retrieval, recommender systems, and deep learning.
  • Represent your data as features to serve as input to machine learning models.
  • Assess the model quality in terms of relevant error metrics for each task.
  • Utilize a dataset to fit a model to analyze new data.
  • Build an end-to-end application that uses machine learning at its core.
  • Implement these techniques in Python.

Syllabus

WEEK 1. Welcome
WEEK 2. Regression: Predicting House Prices (Linear Regression)
WEEK 3. Classification: Analyzing Sentiment (Logistic Regression)
WEEK 4. Clustering and Similarity: Retrieving Documents (k-means, Nearest Neighbors)
WEEK 5. Recommending Products (Matrix factorization)
WEEK 6. Deep Learning: Searching for Images (Neural network, Nearest Neighbors)
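
As a hint of what the Week 2 case study involves, here is a minimal scikit-learn sketch (not the course's own notebooks or data) that fits a linear regression of price on two house-level features and reports a held-out error.

```python
# Illustrative only: linear regression of house price on synthetic features.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
sqft = rng.uniform(500, 4000, 200)
bedrooms = rng.integers(1, 6, 200)
price = 150 * sqft + 10000 * bedrooms + rng.normal(0, 20000, 200)  # fake prices

X = np.column_stack([sqft, bedrooms])
X_train, X_test, y_train, y_test = train_test_split(X, price, random_state=0)

model = LinearRegression().fit(X_train, y_train)
rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
print("coefficients:", model.coef_, "test RMSE:", rmse)
```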

*********************

Register (free): link

———————&&&——————

Coursera MOOC-Practical Machine Learning; Johns Hopkins University [with R]

About this course: Among the most common tasks performed by data scientists and data analysts are prediction and machine learning. This course will cover the basic components of building and applying prediction functions, with an emphasis on practical applications. The course will provide basic grounding in concepts such as training and test sets, overfitting, and error rates. The course will also introduce a range of model-based and algorithmic machine learning methods, including regression, classification trees, Naive Bayes, and random forests. The course will cover the complete process of building prediction functions, including data collection, feature creation, algorithms, and evaluation.
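
The course works in R with the caret package; as a rough analogue only, the scikit-learn sketch below runs the same basic workflow - a train/test split, k-fold cross-validation, and a random forest - on a toy dataset rather than any course data.

```python
# Illustrative only: cross-validated random forest on the built-in iris data.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0)
cv_accuracy = cross_val_score(forest, X_train, y_train, cv=5)   # 5-fold CV on the training set
forest.fit(X_train, y_train)

print("CV accuracy:", cv_accuracy.mean(), "test accuracy:", forest.score(X_test, y_test))
```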

Syllabus

Week 1: Prediction, Errors, and Cross Validation
This week will cover prediction, relative importance of steps, errors, and cross validation.

Week 2: The Caret Package
This week will introduce the caret package, tools for creating features and preprocessing.

Week 3: Predicting with Trees, Random Forests, and Model-Based Predictions
This week we introduce a number of machine learning algorithms you can use to complete your course project.

Week 4: Regularized Regression and Combining Predictors
This week, we will cover regularized regression and combining predictors.

*********************

Register (free): link

————–&&&————-

Lagunita MOOC-Statistical Learning; Stanford University [with R]

ABOUT THIS COURSE

This is an introductory-level course in supervised learning, with a focus on regression and classification methods. The syllabus includes:

  • linear and polynomial regression, logistic regression and linear discriminant analysis;
  • cross-validation and the bootstrap, model selection and regularization methods (ridge and lasso);
  • nonlinear models, splines and generalized additive models;
  • tree-based methods, random forests and boosting;
  • support-vector machines.

Some unsupervised learning methods are discussed:

  • principal components and clustering (k-means and hierarchical).

This is not a math-heavy class, so we try to describe the methods without heavy reliance on formulas and complex mathematics. We focus on what we consider to be the important elements of modern data analysis. Computing is done in R. There are lectures devoted to R, giving tutorials from the ground up and progressing to more detailed sessions that implement the techniques in each chapter.
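
Since the course's computing is in R, the following scikit-learn snippet is only an illustration of the "model selection and regularization" topic listed above: it chooses a lasso penalty by cross-validation on a synthetic regression problem.

```python
# Illustrative only: lasso with the penalty chosen by 10-fold cross-validation.
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV

X, y = make_regression(n_samples=200, n_features=50, n_informative=5,
                       noise=10.0, random_state=0)
lasso = LassoCV(cv=10).fit(X, y)        # searches a path of penalties by CV
print("chosen alpha:", lasso.alpha_)
print("nonzero coefficients:", (lasso.coef_ != 0).sum())
```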

The lectures cover all the material in An Introduction to Statistical Learning, with Applications in R by James, Witten, Hastie and Tibshirani (Springer, 2013). The pdf for this book is available for free on the book website.

*************

Register (free): link


Statistical Learning versus Machine Learning

• Machine learning arose as a subfield of Artificial Intelligence.

• Statistical learning arose as a subfield of Statistics.

• There is much overlap — both fields focus on supervised and unsupervised problems:

  1. Machine learning has a greater emphasis on large scale applications and prediction accuracy.
  2. Statistical learning emphasizes models and their interpretability, and precision and uncertainty.

• But the distinction has become more and more blurred, and there is a great deal of “cross-fertilization”.

• Machine learning has the upper hand in Marketing!


Comparison of methods in Machine Learning

[Figure: comparison chart of machine learning methods]


Unsupervised vs Supervised Learning

Supervised learning covers methods such as regression and classification. In that setting we observe both a set of features X1, X2, ..., Xp for each object and a response or outcome variable Y. The goal is then to predict Y using X1, X2, ..., Xp.

In unsupervised learning, we observe only the features X1, X2, ..., Xp. We are not interested in prediction, because we do not have an associated response variable Y. The goal is to discover interesting things about the measurements: is there an informative way to visualize the data? Can we discover subgroups among the variables or among the observations? We discuss two methods: principal components analysis and clustering.
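
A small scikit-learn sketch of those two unsupervised methods (illustrative only, not course code): project the features with PCA, then look for subgroups with k-means.

```python
# Illustrative only: PCA projection followed by k-means clustering.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)             # features only; no response Y is used
X_2d = PCA(n_components=2).fit_transform(X)   # low-dimensional view of the data
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_2d)
print(labels[:10])                            # cluster assignments for the first observations
```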

———–&&&———-

Udacity MOOC-Machine Learning for Trading; Georgia Tech [with Python]

About this Course

This course introduces students to the real-world challenges of implementing machine learning-based trading strategies, including the algorithmic steps from information gathering to market orders. The focus is on how to apply probabilistic machine learning approaches to trading decisions. We consider statistical approaches like linear regression, KNN, and regression trees, and how to apply them to actual stock trading situations.

What You Will Learn

This course is composed of three mini-courses:

  • Mini-course 1: Manipulating Financial Data in Python
  • Mini-course 2: Computational Investing
  • Mini-course 3: Machine Learning Algorithms for Trading

Each mini-course consists of about 7-10 short lessons. Assignments and projects are interleaved.

Prerequisites and Requirements

Students should have strong coding skills and some familiarity with equity markets. No finance or machine learning experience is assumed.

Note that this course serves students focusing on computer science, as well as students in other majors such as industrial systems engineering, management, or math who have different experiences. All types of students are welcome!

The ML topics might be “review” for CS students, while finance parts will be review for finance students. However, even if you have experience in these topics, you will find that we consider them in a different way than you might have seen before, in particular with an eye towards implementation for trading.

Programming will primarily be in Python. We will make heavy use of numerical computing libraries like NumPy and Pandas.
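
As a toy illustration of that NumPy/Pandas workflow (not a course assignment, and using simulated prices rather than market data), the sketch below builds lagged-return features and fits a KNN regressor to predict the next day's return.

```python
# Illustrative only: lagged daily returns as features, KNN regression as the model.
import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))))  # simulated prices
returns = prices.pct_change().dropna()

# Features: the previous 5 daily returns; target: the next day's return
features = pd.concat([returns.shift(k) for k in range(1, 6)], axis=1).dropna()
target = returns.loc[features.index]

split = int(len(features) * 0.8)                  # simple time-ordered train/test split
knn = KNeighborsRegressor(n_neighbors=10)
knn.fit(features.iloc[:split], target.iloc[:split])
print("out-of-sample R^2:", knn.score(features.iloc[split:], target.iloc[split:]))
```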

Why Take This Course

By the end of this course, you should be able to:

  • Understand data structures used for algorithmic trading.
  • Know how to construct software to access live equity data, assess it, and make trading decisions.
  • Understand 3 popular machine learning algorithms and how to apply them to trading problems.
  • Understand how to assess a machine learning algorithm’s performance for time series data (stock price data).
  • Know how and why data mining (machine learning) techniques fail.
  • Construct a stock trading software system that uses current daily data.

Some limitations/constraints:

  • We use daily data. This is not a high-frequency trading (HFT) course, but many of the concepts here are relevant.
  • We don’t interact (trade) directly with the market, but we will generate equity allocations that you could trade if you wanted to.

*********************

Register (free): link

————–&&&————-