The proposed schedule clusters one-credit courses in two- and four-week segments to help you focus on specific topics during the 10-month, full-time program. Courses are lab-oriented and largely delivered in person.
The program includes an eight-week capstone project, allowing you to work with other students on real-life data sets and to apply the techniques you have learned to larger data sets and more complex problems.
See below for a list of the courses or view more detailed course descriptions.
Overview of data structures, iteration, flow control, and program design relevant to data exploration and analysis. When and how to exploit pre-existing libraries.
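For a taste of the material, here is a minimal Python sketch (made-up text) of the kind of data structure, iteration, and flow control the course covers: tallying word frequencies with a dictionary and a loop.

```python
# Count word frequencies with a dictionary -- a basic pattern combining
# a data structure, iteration, and flow control.
text = "to be or not to be"

counts = {}
for word in text.split():            # iterate over the tokens
    counts[word] = counts.get(word, 0) + 1

print(counts)                        # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```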
How to choose and use appropriate algorithms and data structures to help solve data science problems. Key concepts such as recursion and algorithmic complexity (e.g., efficiency, scalability).
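As an illustrative sketch, recursion and algorithmic complexity come together in binary search: each recursive call halves the search interval, so the running time is O(log n) rather than the O(n) of a linear scan.

```python
# Recursive binary search over a sorted list: O(log n) time because
# each call discards half of the remaining interval.
def binary_search(items, target, lo=0, hi=None):
    """Return the index of target in the sorted list items, or None."""
    if hi is None:
        hi = len(items)
    if lo >= hi:                                   # base case: empty interval
        return None
    mid = (lo + hi) // 2
    if items[mid] == target:
        return mid
    if items[mid] < target:                        # recurse on the right half
        return binary_search(items, target, mid + 1, hi)
    return binary_search(items, target, lo, mid)   # recurse on the left half

print(binary_search([2, 3, 5, 7, 11, 13], 11))     # 4
```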
How to work with data stored in relational database systems or in formats utilizing markup languages. Storage structures and schemas, data relationships, and ways to query and aggregate such data.
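To give a flavour of querying and aggregation, here is a small sketch using Python's built-in sqlite3 module and a made-up "orders" table; the same ideas carry over to full relational database systems.

```python
# Build a tiny in-memory relational table, then query and aggregate it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("alice", 20.0), ("bob", 50.0), ("alice", 15.0)],
)

# Total spending per customer, largest first.
query = (
    "SELECT customer, SUM(amount) AS total "
    "FROM orders GROUP BY customer ORDER BY total DESC"
)
for row in conn.execute(query):
    print(row)                      # ('bob', 50.0) then ('alice', 35.0)
```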
How to install, maintain, and use the data scientific software "stack". The Unix operating system, integrated development environments, and problem-solving strategies.
Interactive vs. scripted/unattended analyses and how to move fluidly between them. Reproducibility through automation and dynamic, literate documents. The use of version control and file organization to enhance machine- and human-readability.
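One small illustration of moving from interactive to unattended analysis: parameters come from the command line instead of being edited by hand, so each run is recorded in the invocation itself (the CSV file and column name below are hypothetical).

```python
# summarize.py -- a scripted, reproducible version of an analysis that
# might have started life in an interactive session.
import argparse

import pandas as pd

parser = argparse.ArgumentParser(description="Summarize one CSV column")
parser.add_argument("csv_path")                   # hypothetical input file
parser.add_argument("--column", default="sales")  # hypothetical column name
args = parser.parse_args()

data = pd.read_csv(args.csv_path)
print(data[args.column].describe())               # same summary on every run
```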
Converting data from the form in which it is collected to the form needed for analysis. How to clean, filter, arrange, aggregate, and transform diverse data types, e.g., strings, numbers, and date-times.
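A hedged sketch of what such wrangling can look like with pandas, on made-up data: tidy inconsistent strings, parse date-times, convert numbers stored as text, drop unusable rows, and aggregate.

```python
import pandas as pd

raw = pd.DataFrame({
    "city":  [" Vancouver", "vancouver ", "Toronto", "Toronto"],
    "date":  ["2023-01-05", "2023-01-06", "2023-01-05", "bad-date"],
    "sales": ["100", "250", "75", "90"],
})

clean = (
    raw.assign(
        city=raw["city"].str.strip().str.title(),           # tidy strings
        date=pd.to_datetime(raw["date"], errors="coerce"),  # parse date-times
        sales=pd.to_numeric(raw["sales"]),                  # text -> numbers
    )
    .dropna(subset=["date"])                                # drop unparseable rows
)

print(clean.groupby("city")["sales"].sum())                 # aggregate by city
```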
How to exploit collaborative software development practices in data scientific workflows. Appropriate use of abstraction and classes, the software life cycle, unit testing / continuous integration, and packaging for use by others.
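As one concrete practice, a unit test pins down the expected behaviour of a function so collaborators (and continuous integration) can catch regressions; the function below is a hypothetical example written in pytest style.

```python
# A function under test, plus a unit test that a runner such as pytest
# would collect and execute automatically.
def mean_absolute_error(y_true, y_pred):
    """Average absolute difference between paired observations."""
    if len(y_true) != len(y_pred):
        raise ValueError("inputs must be the same length")
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

def test_mean_absolute_error():
    assert mean_absolute_error([1, 2, 3], [1, 2, 3]) == 0
    assert mean_absolute_error([0, 0], [1, 3]) == 2

test_mean_absolute_error()   # in practice, run via `pytest` instead
```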
How to use the web as a platform for data collection, computation, and publishing. Accessing data via scraping and APIs. Using the cloud for tasks that are beyond the capability of your local computing resources.
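A minimal sketch of API-based data collection with the requests library; the endpoint and its parameters are hypothetical stand-ins for whatever service you are querying.

```python
import requests

resp = requests.get(
    "https://api.example.com/v1/measurements",   # hypothetical endpoint
    params={"station": "YVR", "limit": 100},     # hypothetical parameters
    timeout=10,
)
resp.raise_for_status()          # fail loudly on HTTP errors
records = resp.json()            # many web APIs return JSON
print(len(records), "records retrieved")
```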
The design and implementation of static figures across all phases of data analysis, from ingest and cleaning to description and inference. How to make principled and effective choices with respect to marks, spatial arrangement, and colour.
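For instance, a basic static figure in matplotlib (made-up data): informative axis labels with units, a deliberate colour choice, and no superfluous marks.

```python
import matplotlib.pyplot as plt

hours = [1, 2, 3, 4, 5, 6]                     # made-up illustrative data
yield_kg = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]

fig, ax = plt.subplots()
ax.scatter(hours, yield_kg, color="tab:blue")  # a single, high-contrast hue
ax.set_xlabel("Sunlight (hours/day)")          # labels carry the units
ax.set_ylabel("Crop yield (kg)")
ax.set_title("Yield increases with sunlight")
plt.show()
```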
Analysis, design, and implementation of interactive figures. How to provide multiple views, deal with complexity, and make difficult decisions about data reduction.
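One of many possible tools: a hedged sketch with Plotly Express, where hover details and a log scale provide additional views of the same data without cluttering the figure.

```python
import plotly.express as px

# Bundled example data: one row per country for 2007.
df = px.data.gapminder().query("year == 2007")

fig = px.scatter(
    df, x="gdpPercap", y="lifeExp",
    color="continent", size="pop",
    hover_name="country",            # details on demand, via the tooltip
    log_x=True,                      # a second view of a skewed variable
)
fig.show()
```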
The legal, ethical, and security issues concerning data, including aggregated data. Proactive compliance with rules and, in their absence, principles for the responsible management of sensitive data. Case studies.
Effective oral and written communication, across diverse target audiences, to facilitate understanding and decision-making. How to present and interpret data, with productive skepticism and an awareness of assumptions and bias.
Describing data in terms of its location, spread, and general distribution. How to balance the use of procedures from classical, parametric statistics with robust approaches that account for outliers and missing data.
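A small numerical illustration: with a single outlier, the mean and standard deviation shift markedly, while the median and the median absolute deviation (MAD) barely move.

```python
import numpy as np

x = np.array([4.8, 5.1, 5.0, 4.9, 5.2, 50.0])   # 50.0 is an outlier

print("mean:  ", x.mean())                       # pulled toward the outlier
print("median:", np.median(x))                   # robust measure of location
print("std:   ", x.std(ddof=1))                  # inflated by the outlier
mad = np.median(np.abs(x - np.median(x)))
print("MAD:   ", mad)                            # robust measure of spread
```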
The statistical and probabilistic foundations of inference, developed jointly through mathematical derivations and simulation techniques. Important distributions and large sample results. The frequentist paradigm.
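As a sketch of the simulation side, the code below checks a large-sample result empirically: sample means of a skewed (exponential) population concentrate around the true mean, with spread close to the central limit theorem's prediction.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n, reps = 50, 10_000

# 10,000 samples of size 50 from an Exponential(mean = 2.0) population.
sample_means = rng.exponential(scale=2.0, size=(reps, n)).mean(axis=1)

# CLT prediction: centre near 2.0, spread near 2.0 / sqrt(50) ~ 0.283.
print(sample_means.mean(), sample_means.std(ddof=1))
```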
Methods for dealing with the multiple testing problem. Bayesian reasoning for data science. How to formulate and implement inference using the prior-to-posterior paradigm.
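A minimal prior-to-posterior example under a conjugate model (all numbers invented): a Beta(2, 2) prior on a success probability, updated with 12 successes in 40 trials, gives a Beta(14, 30) posterior.

```python
from scipy import stats

prior_a, prior_b = 2, 2            # Beta(2, 2) prior
successes, trials = 12, 40         # invented data

post_a = prior_a + successes       # Beta-Binomial conjugate update
post_b = prior_b + (trials - successes)
posterior = stats.beta(post_a, post_b)

print("posterior mean:", posterior.mean())             # ~0.318
print("95% credible interval:", posterior.interval(0.95))
```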
Statistical evidence from randomized experiments versus observational studies. Applications of randomization, e.g., A/B testing for website optimization.
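For a flavour of the A/B testing application, here is a two-proportion z-test on invented conversion counts for two website variants.

```python
import numpy as np
from scipy import stats

conv_a, n_a = 120, 1000        # variant A: 12.0% conversion (invented)
conv_b, n_b = 150, 1000        # variant B: 15.0% conversion (invented)

p_pool = (conv_a + conv_b) / (n_a + n_b)
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (conv_b / n_b - conv_a / n_a) / se
p_value = 2 * stats.norm.sf(abs(z))      # two-sided test

print(round(z, 2), round(p_value, 4))    # z ~ 1.96, p ~ 0.05
```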
Linear models for a quantitative response variable, with multiple categorical and/or quantitative predictors. Matrix formulation of linear regression. Model assessment and prediction.
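The matrix formulation in miniature, on made-up data: the least-squares estimate solves the normal equations X'X b = X'y, though in practice a numerically stable solver such as np.linalg.lstsq is preferred.

```python
import numpy as np

X = np.array([[1, 2.0], [1, 4.0], [1, 6.0], [1, 8.0]])  # intercept column + predictor
y = np.array([3.1, 5.2, 6.8, 9.1])

beta_normal = np.linalg.solve(X.T @ X, X.T @ y)     # normal equations
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)  # stable solver

print(beta_normal)    # [intercept, slope]
print(beta_lstsq)     # same estimates
```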
Useful extensions to basic regression, e.g., generalized linear models, mixed effects, smoothing, robust regression, and techniques for dealing with missing data.
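As one example of these extensions, a logistic generalized linear model fit with statsmodels on a tiny made-up data set:

```python
import numpy as np
import statsmodels.api as sm

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # made-up predictor
y = np.array([0, 0, 1, 0, 1, 1])               # binary response

X = sm.add_constant(x)                          # add an intercept column
model = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(model.params)                             # coefficients on the log-odds scale
```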
How to find groups and other structure in unlabeled, possibly high-dimensional data. Dimension reduction for visualization and data analysis. Clustering, association rules, model fitting via the EM algorithm.
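A hedged scikit-learn sketch on synthetic data: reduce ten dimensions to two with principal component analysis, then look for groups with k-means.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(seed=0)
# Two well-separated groups hidden in 10 dimensions.
X = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(5, 1, (50, 10))])

X2 = PCA(n_components=2).fit_transform(X)        # dimension reduction
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X2)
print(np.bincount(labels))                       # roughly [50, 50]
```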
Introduction to supervised machine learning, with a focus on classification. Decision trees, logistic regression, and basic machine learning concepts such as generalization error and overfitting.
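An illustrative scikit-learn example of overfitting: an unconstrained decision tree scores perfectly on training data yet generalizes worse than a shallow one.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for depth in (None, 3):                        # unlimited vs. shallow tree
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    tree.fit(X_tr, y_tr)
    print(depth, tree.score(X_tr, y_tr), tree.score(X_te, y_te))
```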
How to combine models via ensembling: boosting, bagging, random forests. Neural networks and deep learning: state-of-the-art implementation considerations in both software and hardware.
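A small taste of ensembling with scikit-learn: on synthetic data, a random forest (bagged, de-correlated decision trees) typically outperforms any single tree on held-out data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

single = DecisionTreeClassifier(random_state=1).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)

print("tree:  ", single.score(X_te, y_te))
print("forest:", forest.score(X_te, y_te))     # usually the higher score
```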
How to evaluate and select features and models. Cross-validation, ROC curves, feature engineering, the role of regularization. Automating these tasks with hyperparameter optimization.
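One way these tasks are automated in practice: a cross-validated grid search over a regularization strength, sketched with scikit-learn on synthetic data.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=300, random_state=0)

search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},  # inverse regularization strength
    cv=5,                                       # 5-fold cross-validation
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```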
Model fitting and prediction in the presence of correlation due to temporal and/or spatial association. ARIMA models and Gaussian processes.
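A hedged sketch of the temporal side: simulate an autocorrelated AR(1) series, then recover the dependence with an ARIMA model from statsmodels.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(seed=0)
y = np.zeros(200)
for t in range(1, 200):                  # AR(1) process with coefficient 0.7
    y[t] = 0.7 * y[t - 1] + rng.normal()

fit = ARIMA(y, order=(1, 0, 0)).fit()
print(fit.params)                        # includes an AR coefficient near 0.7
print(fit.forecast(steps=5))             # five-step-ahead predictions
```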
Introduction to probabilistic graphical models and their applications; systems to generate personalized recommendations; A/B testing and website optimization; text analysis; rapid prototyping of machine learning models.
A mentored group project based on real data and questions from a partner within or outside the university. Students will formulate questions and design and execute a suitable analysis plan. The group will work collaboratively to produce a project report, presentation, and possibly other products, such as a web application.