# Courses

The schedule clusters one-credit courses into four-week segments to help you focus on specific topics across the 10-month program. This is a full-time program. Courses are lab-oriented and delivered in person.

The program includes an eight- to ten-week capstone project in which you work with other students on real-life data sets, applying the techniques you have learned to larger data sets and more complex problems.

## Course Descriptions

Basic programming in R and Python. Overview of data structures, iteration, flow control, and program design relevant to data exploration and analysis. When and how to exploit pre-existing libraries.

Instructors: Patrick Walls and Vincenzo Coia.
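As a rough illustration of the kind of iteration, flow control, and function design this course covers (the function and data below are made up for the example):

```python
# A small illustration of iteration, flow control, and function design:
# count how many values in a list fall inside a given range.

def count_in_range(values, low, high):
    """Return how many values satisfy low <= v <= high."""
    count = 0
    for v in values:          # iteration
        if low <= v <= high:  # flow control
            count += 1
    return count

measurements = [3.2, 7.1, 5.5, 9.8, 4.0]
print(count_in_range(measurements, 4.0, 8.0))  # 3 values fall in [4.0, 8.0]
```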

How to install, maintain, and use the data science software “stack”. The Unix operating system, integrated development environments, and problem-solving strategies.

Instructor: Tiffany Timbers.

How to present and interpret data science findings. Drawing on the scholarship of language and cognition, this course is about how effective data scientists write, speak, and think.

Instructor: David Laing.

Fundamental concepts in probability. Statistical view of data coming from a probability distribution.

Instructor: Mike Gelbart.

Converting data from the form in which it is collected to the form needed for analysis. How to clean, filter, arrange, aggregate, and transform diverse data types, e.g. strings, numbers, and date-times.

Instructor: Jenny Bryan.
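A minimal sketch of this kind of wrangling, using only the Python standard library and invented records: raw strings are parsed into typed values (dates, numbers) and then aggregated.

```python
from collections import defaultdict
from datetime import datetime

# Raw records as collected: dates and amounts arrive as strings.
raw = [
    "2023-01-15,12.50",
    "2023-01-20,7.25",
    "2023-02-03,30.00",
]

# Clean and transform: parse strings into typed values,
# then aggregate totals by month.
totals = defaultdict(float)
for line in raw:
    date_str, amount_str = line.split(",")
    date = datetime.strptime(date_str, "%Y-%m-%d")
    totals[date.strftime("%Y-%m")] += float(amount_str)

print(dict(totals))  # {'2023-01': 19.75, '2023-02': 30.0}
```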

Exploratory data analysis. Design of effective static visualizations. Plotting tools in R and Python.

Instructor: Vincenzo Coia.

How to choose and use appropriate algorithms and data structures to help solve data science problems. Key concepts such as recursion and algorithmic complexity (e.g., efficiency, scalability).

Instructor: Patrice Belleville.
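To make the recursion and complexity themes concrete, here is a standard example (not course material): merge sort, a recursive algorithm whose O(n log n) behaviour illustrates algorithmic complexity.

```python
def merge_sort(xs):
    """Recursive merge sort: O(n log n) comparisons."""
    if len(xs) <= 1:               # base case
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])    # recursive case: sort each half...
    right = merge_sort(xs[mid:])
    # ...then merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```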

The statistical and probabilistic foundations of inference, developed jointly through mathematical derivations and simulation techniques. Important distributions and large sample results. Methods for dealing with the multiple testing problem. The frequentist paradigm.

Instructor: Tiffany Timbers.
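One common way inference is developed through simulation is the bootstrap. A sketch with made-up data (the sample and interval here are purely illustrative):

```python
import random
import statistics

random.seed(42)

# A small sample whose population mean we want to estimate.
sample = [4.1, 5.6, 3.9, 6.2, 5.0, 4.8, 5.3, 4.4]

# Bootstrap: resample with replacement many times and record the mean
# of each resample to approximate the sampling distribution.
boot_means = sorted(
    statistics.mean(random.choices(sample, k=len(sample)))
    for _ in range(5000)
)

# A 95% percentile interval from the simulated distribution.
lo, hi = boot_means[int(0.025 * 5000)], boot_means[int(0.975 * 5000)]
print(f"mean = {statistics.mean(sample):.2f}, 95% CI ≈ ({lo:.2f}, {hi:.2f})")
```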

Linear models for a quantitative response variable, with multiple categorical and/or quantitative predictors. Matrix formulation of linear regression. Model assessment and prediction.

Instructor: Gabriela Cohen Freue.
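The matrix formulation mentioned above solves the normal equations, beta = (X'X)^(-1) X'y. For a single predictor plus an intercept this reduces to a 2x2 system, written out below with invented data:

```python
# Simple linear regression via the normal equations, for one quantitative
# predictor plus an intercept (a 2x2 linear system).

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 4.0, 6.2, 8.1, 9.9]   # roughly y = 2x

n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(v * v for v in x)
sxy = sum(a * b for a, b in zip(x, y))

# Solve the 2x2 normal equations for intercept b0 and slope b1.
det = n * sxx - sx * sx
b0 = (sy * sxx - sx * sxy) / det
b1 = (n * sxy - sx * sy) / det

print(f"intercept = {b0:.3f}, slope = {b1:.3f}")
prediction = b0 + b1 * 6.0   # prediction at a new x value
```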

Interactive vs. scripted/unattended analyses and how to move fluidly between them. Reproducibility through automation and dynamic, literate documents. The use of version control and file organization to enhance machine- and human-readability.

Instructor: Tiffany Timbers.

Introduction to supervised machine learning, with a focus on classification. k-NN, decision trees, SVMs, and how to combine models via ensembling: boosting, bagging, and random forests. Basic machine learning concepts such as generalization error and overfitting.

Instructor: Mike Gelbart.
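A toy illustration of one classifier named above, k-NN, hand-rolled on two invented clusters: a point is labeled by majority vote among its nearest training examples.

```python
import math
from collections import Counter

# Tiny labeled training set: (feature vector, class label).
train = [
    ((1.0, 1.0), "a"), ((1.2, 0.8), "a"), ((0.9, 1.1), "a"),
    ((4.0, 4.2), "b"), ((4.1, 3.9), "b"), ((3.8, 4.0), "b"),
]

def knn_predict(point, k=3):
    """Classify `point` by majority vote among its k nearest neighbours."""
    neighbours = sorted(train, key=lambda t: math.dist(point, t[0]))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

print(knn_predict((1.1, 0.9)))  # 'a': closest to the first cluster
print(knn_predict((4.0, 4.0)))  # 'b'
```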

How to work with data stored in relational database systems. Storage structures and schemas, data relationships, and ways to query and aggregate such data.

Instructor: Bhav Dillon.
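A minimal example of querying and aggregating relational data, using Python's built-in sqlite3 module and an invented table:

```python
import sqlite3

# An in-memory relational database: one table, a few rows,
# and an aggregating query with GROUP BY.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("ana", 10.0), ("ana", 5.0), ("raj", 20.0)],
)

rows = conn.execute(
    "SELECT customer, SUM(amount) FROM orders "
    "GROUP BY customer ORDER BY customer"
).fetchall()
print(rows)  # [('ana', 15.0), ('raj', 20.0)]
conn.close()
```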

Useful extensions to basic regression, e.g., generalized linear models, mixed effects, smoothing, robust regression, and techniques for dealing with missing data.

Instructor: Vincenzo Coia.

How to evaluate and select features and models. Cross-validation, ROC curves, feature engineering, and regularization.

Instructor: Mark Schmidt.
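Cross-validation can be sketched in a few lines: every observation lands in exactly one validation fold, and the model is always scored on data it never saw. The "model" below (a mean-only predictor) and the data are invented for the example.

```python
# A hand-rolled k-fold split for cross-validation.

def k_fold_indices(n, k):
    """Yield (train_idx, valid_idx) pairs for k-fold cross-validation."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i, valid in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, valid

y = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0]
errors = []
for train, valid in k_fold_indices(len(y), k=3):
    prediction = sum(y[j] for j in train) / len(train)  # mean-only "model"
    errors.extend(abs(y[j] - prediction) for j in valid)

mae = sum(errors) / len(errors)
print(f"cross-validated MAE = {mae:.2f}")  # 3.00
```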

How to find groups and other structure in unlabeled, possibly high-dimensional data. Dimension reduction for visualization and data analysis. Clustering, association rules, and model fitting via the EM algorithm.

Instructor: TBD.

How to apply collaborative software development practices in data science workflows. Appropriate use of abstraction and classes, the software life cycle, unit testing and continuous integration, and packaging for use by others.

Instructor: Meghan Allen.
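A sketch of a small, testable unit and its tests; in practice such tests would live in a package and run automatically under continuous integration. The function and checks are invented for illustration.

```python
def zscore(values):
    """Standardize values to mean 0 and (population) standard deviation 1."""
    n = len(values)
    if n < 2:
        raise ValueError("need at least two values")
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

# Unit tests: check both the happy path and the error path.
result = zscore([2.0, 4.0, 6.0])
assert abs(sum(result)) < 1e-9   # standardized values sum to 0
assert abs(result[1]) < 1e-9     # the middle value equals the mean
try:
    zscore([1.0])
except ValueError:
    pass  # expected: too few values to standardize
```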

The legal, ethical, and security issues concerning data, including aggregated data. Proactive compliance with rules and, in their absence, principles for the responsible management of sensitive data. Case studies.

Instructor: Ed Knorr.

Introduction to optimization. Gradient descent and stochastic gradient descent. Roundoff error and finite differences. Neural networks and deep learning.

Instructor: Mike Gelbart.
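Gradient descent in its simplest form, on a made-up one-dimensional objective: repeatedly step against the gradient until the iterate approaches the minimizer.

```python
# Plain gradient descent on f(x) = (x - 3)^2, whose minimum is at x = 3.
# The gradient is f'(x) = 2 * (x - 3); each step moves against it.

def grad(x):
    return 2.0 * (x - 3.0)

x = 0.0    # starting point
lr = 0.1   # learning rate (step size)
for _ in range(100):
    x -= lr * grad(x)

print(f"x ≈ {x:.4f}")  # converges toward 3.0
```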

How to use the web as a platform for data collection, computation, and publishing. Accessing data via scraping and APIs. Using the cloud for tasks that are beyond the capability of your local computing resources.

Instructor: Mike Feeley.

Bayesian reasoning for data science. How to formulate and implement inference using the prior-to-posterior paradigm.

Instructor: Alexandre Bouchard-Cote.
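The prior-to-posterior paradigm is easiest to see in a conjugate model. A sketch with invented counts, using the Beta-Binomial: a Beta(a, b) prior on a success probability, updated with k successes in n trials, gives a Beta(a + k, b + n - k) posterior.

```python
a, b = 2.0, 2.0   # prior: weakly centred on 0.5
k, n = 7, 10      # observed data: 7 successes in 10 trials

# Conjugate update: posterior is Beta(a + k, b + n - k).
post_a, post_b = a + k, b + (n - k)
post_mean = post_a / (post_a + post_b)

print(f"posterior: Beta({post_a}, {post_b}), mean = {post_mean:.3f}")
# The posterior mean (0.643) sits between the prior mean (0.5)
# and the sample proportion (0.7).
```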

Advanced machine learning methods, with an undercurrent of natural language processing (NLP) applications. Bag of words, recommender systems, topic models, natural language as sequence data, Markov chains, and RNNs for text synthesis. An introduction to popular NLP libraries in Python.

Instructor: TBD.
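The bag-of-words idea can be shown in a few lines: each document becomes a vector of word counts over a shared vocabulary, discarding word order. The two toy documents are invented.

```python
from collections import Counter

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
]

counts = [Counter(doc.split()) for doc in docs]
vocab = sorted(set(word for c in counts for word in c))

# Each row is one document's count vector over the shared vocabulary.
vectors = [[c[word] for word in vocab] for c in counts]
print(vocab)    # ['cat', 'dog', 'log', 'mat', 'on', 'sat', 'the']
print(vectors)  # [[1, 0, 0, 1, 1, 1, 2], [0, 1, 1, 0, 1, 1, 2]]
```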

Model fitting and prediction in the presence of correlation due to temporal and/or spatial association. ARIMA models.

Instructor: Natalia Nolde.

Statistical evidence from randomized experiments versus observational studies. Applications of randomization, e.g., A/B testing for website optimization.

Instructor: Paul Gustafson.
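A typical A/B-test analysis is a two-proportion z-test. A sketch with made-up conversion counts, using only the standard library:

```python
from statistics import NormalDist

# Did variant B's conversion rate differ from variant A's?
conv_a, n_a = 120, 2400   # conversions / visitors, variant A
conv_b, n_b = 165, 2400   # variant B

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
z = (p_b - p_a) / se

# Two-sided p-value from the standard normal distribution.
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.2f}, p = {p_value:.4f}")
```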

How to make principled and effective choices with respect to marks, spatial arrangement, and colour. Analysis, design, and implementation of interactive figures. How to provide multiple views, deal with complexity, and make difficult decisions about data reduction.

Instructor: Cydney Nielsen.

A mentored group project based on real data and questions from a partner within or outside the university. Students will formulate questions and design and execute a suitable analysis plan. The group will work collaboratively to produce a project report, presentation, and possibly other products, such as a web application.

Instructors: MDS Staff.