In [5]:
from nbconvert import HTMLExporter
In [6]:
def chill():
    # Convert this notebook to HTML and prepend an empty Jekyll front-matter
    # block so GitHub Pages serves the result through its template pipeline.
    body, _resources = HTMLExporter().from_filename('JupyterDay Atlanta.ipynb')
    with open('index.html', 'w') as f:
        f.write('---\n---\n' + body)


  • 9:00am - 9:15am — Coffee and Donuts provided by Continuum Analytics
  • 9:15am - 10am — Panel discussion Open Source is the Medium of Innovation
  • 10am - 12pm — Case studies
    • 10am - 10:40am — Stephen Welch
    • 10:40am - 11:20am — Chris Calloway
    • 11:20am - 12pm — Paco Nathan
  • 12pm - 1pm — Lunch provided by O'Reilly Media
  • 1pm - 1:40pm — Carol Willing - JupyterHub: A "Thing Explainer" overview
  • 1:40pm - 4:45pm — Workshops: coding in several tracks, with data sets, demos, and code starting points on GitHub
    • Up and Running: Learning & Deploying The Jupyter Stack
    • BYOC: Bring Your Own Code (to Jupyter)
    • Notebooks for Science: Exploration, Publication, and Reproducibility
      • 1:40pm - 2pm — Rob Clewley on fovea, a Google Summer of Code project
      • 2:30pm - 2:45pm — Nils Persson
      • 3pm - 3:15pm — Eric Hein
      • 3:30pm - 3:45pm — David Nicholson
  • 4:15pm - 4:30pm — Coffee Break
  • 4:30pm - 5pm — Steven Silvester - Walkthrough of the New JupyterLab!
  • 5pm - 5:30pm — Town Hall

Open Source is the Medium of Innovation Panel Discussion

Josh Davis
Georgia Tech Research Institute

Joshua L. Davis is a Senior Research Scientist and the head of an HPC/Data Analytics division at the Georgia Tech Research Institute. Mr. Davis has 10 years of experience in Department of Defense software and is the co-founder of the Military Open Source Software community. He holds an MBA and a B.S. in Computer Science, and is a Certified Information Systems Security Professional (CISSP) and Certified Ethical Hacker (C/EH).

Carol Willing
Python Software Foundation, Project Jupyter, Cal Poly

Carol Willing is a Director of the Python Software Foundation, a core developer for Project Jupyter, and a Research Software Engineer at Cal Poly San Luis Obispo.

She's also Geek-In-Residence at Fab Lab San Diego and co-organizes PyLadies San Diego and San Diego Python. She's an active contributor to open source projects and a maintainer for OpenHatch and the Anita Borg Institute's open source projects. Combining a love of nature, the arts, and math with a BSE in Electrical Engineering from Duke and an MS in Management from MIT, she's enjoyed creating and teaching others for over 20 years.

She recently spoke at Grace Hopper Celebration 2015, PyCon 2015, PyCon Philippines 2016, Write/Speak/Code 2016, and SciPy 2016.

Robert Clewley

Dr. Robert Clewley is a polymath scientist and educator, specializing in computational science and mathematical modeling. He has published academic articles about the modeling of epilepsy, cancer, cardiology, and biomechanics. His research has been supported by federal grants from NSF and the Army Research Laboratory. Dr. Clewley also develops the open source PyDSTool modeling software that is used internationally in many scientific and engineering fields.

The talks below are listed in no particular order.

Chris Calloway

Using Jupyter Dashboard and ipympl for Tropical Storm Emergency Management

ADCIRC is an ocean circulation model with storm surge and inundation byproducts. The model has been run hundreds of times to compile an estimation matrix. In a Jupyter Dashboard application, a map of storm surge from a typical tropical storm is displayed. Using ipywidgets, sliders on the dashboard allow emergency managers to vary properties of the storm, such as wind velocity and landfall location, to update the map with the likely storm surge computed from the estimation matrix. This dashboard is the first use of the recently released ipympl, which treats matplotlib figures as ipywidgets.
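The slider-driven lookup the abstract describes can be sketched with plain NumPy. The grid shapes, parameter ranges, and the random `estimation_matrix` here are illustrative stand-ins for the precomputed ADCIRC runs; in the actual dashboard each update would be wired to an ipywidgets slider callback that redraws the ipympl figure.

```python
import numpy as np

# Hypothetical precomputed estimation matrix: surge height (m) indexed
# by wind velocity and landfall position along the coastline.
wind_speeds = np.linspace(20, 80, 7)    # m/s
landfall_km = np.linspace(0, 300, 11)   # km along the coast
rng = np.random.default_rng(0)
estimation_matrix = rng.uniform(0.5, 6.0, size=(7, 11))

def surge_estimate(wind, landfall):
    """Nearest-neighbour lookup into the precomputed model runs."""
    i = int(np.abs(wind_speeds - wind).argmin())
    j = int(np.abs(landfall_km - landfall).argmin())
    return estimation_matrix[i, j]

# A slider callback would call surge_estimate(wind_slider.value,
# landfall_slider.value) and update the surge field on the map.
```

In the dashboard, the nearest-neighbour lookup would likely be replaced by interpolation over the estimation matrix, but the shape of the computation is the same.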

Steven Silvester
Continuum Analytics

JupyterLab Walkthrough

This talk will give the audience a tour of the latest features of JupyterLab, along with the intended path of JupyterLab development and how third-party extensions will be added. There is no knowledge requirement, but experience with the existing Jupyter Notebook would be beneficial. The preferred duration is 25 minutes, plus 5 minutes for questions.

Carol Willing
Python Software Foundation, Project Jupyter, Cal Poly

JupyterHub: A "Thing Explainer" overview

JupyterHub brings the power of Jupyter notebooks to a group of users. University classes, workshops, and user groups can all benefit from notebooks being hosted on a common server. Configuring a JupyterHub server can be complex with many details. This presentation will give the audience an overview of JupyterHub’s architecture and how to install and configure JupyterHub. Inspired by Randall Munroe’s (“xkcd”) Thing Explainer, we’ll break down JupyterHub’s complexity into simple understandable concepts. Whether you are new to JupyterHub or have some experience with it, you will leave this talk empowered to give JupyterHub a try.
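To make the "many details" of configuration concrete, a minimal `jupyterhub_config.py` might look like the fragment below. This is a sketch, not the presenter's recommended setup; the authenticator and spawner shown are JupyterHub's defaults, and the admin user name is illustrative.

```python
# jupyterhub_config.py -- minimal sketch; `c` is the config object
# JupyterHub supplies when it loads this file.

# Where the public proxy listens.
c.JupyterHub.ip = '0.0.0.0'
c.JupyterHub.port = 8000

# Authenticate against local system accounts (the default PAM setup).
c.JupyterHub.authenticator_class = 'jupyterhub.auth.PAMAuthenticator'

# Spawn each user's notebook server as a local process (the default).
c.JupyterHub.spawner_class = 'jupyterhub.spawner.LocalProcessSpawner'

# Users allowed to access the admin panel (illustrative name).
c.Authenticator.admin_users = {'admin'}
```

Swapping the authenticator (e.g., for OAuth) or the spawner (e.g., for Docker) is where most real deployments diverge from this baseline.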

Stephen Welch
Wheego Electric Cars, Inc.

Wandering in Four Dimensions

Imaginary and complex numbers are perhaps the most underrated mathematical discovery ever. Functions of these variables are beautiful, elegant, and super useful in mathematics and science. However, as inherently four-dimensional objects, functions of complex variables are notoriously difficult to understand. With the help of the Jupyter Notebook, we’ll visualize our way through the deep and profound world of complex functions and Riemann Surfaces. This talk accompanies the final parts of the YouTube series Imaginary Numbers Are Real.
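One standard trick for flattening a four-dimensional complex function onto the plane is domain colouring: map the argument of f(z) to hue and the modulus to brightness. A minimal NumPy sketch (the function f and grid bounds are illustrative, not taken from the talk):

```python
import numpy as np

# Sample a square region of the complex plane.
x = np.linspace(-2, 2, 400)
z = x[None, :] + 1j * x[:, None]

f = z**2 + 1  # any complex function of interest

# Phase -> hue in [0, 1]; modulus -> brightness in [0, 1).
hue = (np.angle(f) + np.pi) / (2 * np.pi)
value = 1 - 1 / (1 + np.abs(f))

# hue and value can be stacked into an HSV image and rendered with
# matplotlib.colors.hsv_to_rgb plus plt.imshow.
```

Zeros of f show up as dark points where all hues meet, which is what makes the zeros of z² + 1 at ±i jump out of the picture.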

Paco Nathan
O'Reilly Media

Oriole: a new learning medium based on Jupyter + Docker

O’Reilly Media needed to provide a way for authors to use Jupyter notebooks to create professional publications. We also wanted to integrate video narration in the UX. The result is a unique new learning medium called Oriole, where Jupyter notebooks are used in the middleware, each viewer gets a 100% HTML experience (no download/install needed), the code and video are sync’ed together, and each web session has a Docker container running in the cloud. Tutorials are much quicker to publish than “traditional” books and video. This talk will show examples plus examine the system architecture, built from open source projects. We’ll review feedback from authors working in this medium, i.e., how to teach more effectively through video + notebooks + containers.


Notebooks for Science

David Nicholson
Emory University

Fit Your Learning Curves For Fun and Profit

Scientists who study machine learning often plot the error of a model against the amount of data used to train that model. Such plots are known as learning curves or validation curves. In 1994, Cortes et al. proposed a method for fitting these curves with an exponential decay function. Their method provides a way to predict how different models stack up against each other. Importantly, it can avoid the computationally expensive process of estimating error for large training sets. With help from a Jupyter notebook, I will introduce exponential decay functions and give a brief derivation of Cortes et al.'s method. Then I will demonstrate how to fit learning curves with their model, using the data sets built into the scikit-learn library. I will also demonstrate some less-than-ideal fits using my own (lovely) data. Lastly I will discuss how it might be possible to detect statistically significant differences between models using the fit parameters.
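The fitting step can be sketched with `scipy.optimize.curve_fit`. The decay parameterisation and the synthetic "measured" errors below are illustrative, not the talk's data; the point is that the fitted asymptote estimates a model's large-data error without ever training on the full set.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(n, e_inf, b, tau):
    # Exponential-decay learning curve: validation error approaches the
    # asymptote e_inf as the training-set size n grows.
    return e_inf + b * np.exp(-n / tau)

# Synthetic learning curve standing in for measured validation error.
rng = np.random.default_rng(42)
n = np.array([50, 100, 200, 400, 800, 1600, 3200], dtype=float)
err = decay(n, 0.10, 0.5, 400) + rng.normal(0, 0.005, n.size)

params, _cov = curve_fit(decay, n, err, p0=(0.1, 0.5, 500))
e_inf, b, tau = params
# e_inf is the quantity to compare across models: the predicted error
# in the limit of abundant training data.
```

With real learning curves the covariance matrix from `curve_fit` gives the parameter uncertainties needed to ask whether two models' asymptotes differ significantly.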

Robert Clewley
Mailchimp, Google Summer of Code

Next-gen interactivity for data modeling

We are creating capabilities for modelers and data scientists to exploit diagnostics earlier in their model-building dev cycle. Visualizations are not just for the end results! The process of model development needs more attention from visual diagnostic tools. We have a fledgling GUI, extensions to Matplotlib, and some exciting examples that we want to push into Jupyter. We want more community input on these ideas for fovea.

Nils Persson
Georgia Institute of Technology

Teaching a Computer to Read Science

People in all scientific disciplines spend an incredible amount of time reading and preparing published figures and graphs. Somewhere between 60% and 80% of all published scientific data is trapped in a .jpg file or a PDF somewhere on a publisher's server. We live in an era of learning from big data, yet arguably the biggest, most validated, most scrutinized data collected by humankind is hiding in images. Automated extraction of data from images of figures and graphs has received a smattering of attention over the past decades, but the time has never been riper for this technology to come to the fore. With a dataset of hundreds of figures from open-access journals, how much information can we extract armed with just a Jupyter Notebook and computer vision libraries? The answer: quite a lot.

Eric Hein
Georgia Tech Research Institute

Visualizing Simulation Results with Jupyter

In my research, I often want to explore a design space, sweeping out several parameter values to see which one delivers the best performance across a suite of trials. Extracting insight from hundreds of simulator output files is time consuming and error-prone. Visualizing the results of many experiments at once is the fastest way to validate the results and get feedback from my research group. In this talk I will showcase my end-to-end solution for visualizing simulation results with Jupyter notebook. The pandas and seaborn libraries are used to interactively generate paper-ready plots that are automatically labeled with the correct parameter names from the source data. In this short talk, audience members with a basic understanding of plotting in Jupyter will learn how to streamline the process of extracting, labeling, and visualizing data from raw text files.
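The extract-and-aggregate step of such a pipeline can be sketched in pandas. The parameter name, measurements, and the seaborn call in the final comment are all illustrative, not from the talk's simulator:

```python
import io
import pandas as pd

# Simulator output collected into tidy rows: one row per trial, with
# the swept parameter and the measured runtime.
raw = io.StringIO("""\
cache_size,trial,runtime_s
64,0,1.92
64,1,1.88
128,0,1.41
128,1,1.44
256,0,1.23
256,1,1.20
""")
df = pd.read_csv(raw)

# Mean and spread per parameter value, ready for plotting.
summary = df.groupby('cache_size')['runtime_s'].agg(['mean', 'std'])
# seaborn.lineplot(data=df, x='cache_size', y='runtime_s') would then
# draw the sweep with confidence bands and axes labeled straight from
# the column names.
```

Keeping the data tidy (one observation per row) is what lets seaborn label plots automatically from the source columns.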

Up and Running with Jupyter

In this workshop, Carol Willing and Paco Nathan guide you through the Jupyter universe.

  • How do I get started with Jupyter?
  • How do I supercharge my notebooks?
  • How do I share my work with friends and colleagues?

Join this session to power up your notebook skills!

Bring Your Own Code to Jupyter

The Jupyter stack is built from the ground up to be extensible and hackable, but Jupyter is a big planet with many moons. Let’s learn about and build tools and approaches for getting started, testing, and distribution of Jupyter extensions. After a run-down of the extension points (sanctioned and otherwise) within the existing architecture, we’ll want to hear about existing extensions to the notebook server, the current notebook JS application, nbviewer, and JupyterHub… and start looking at next-generation tools like JupyterLab.

This workshop will be organized by Jupyter developers Steven Silvester and Nick Bollweg.
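As a taste of the sanctioned extension points the workshop covers, a notebook server extension can be as small as a single function. The module name and log message here are illustrative, not from the workshop materials:

```python
# hello_ext.py -- a minimal notebook server extension (illustrative).
def load_jupyter_server_extension(nbapp):
    # The notebook server calls this hook at startup once the extension
    # has been enabled, e.g.:  jupyter serverextension enable hello_ext
    nbapp.log.info('hello_ext has been loaded')
```

Client-side (JS) extensions, nbviewer plugins, and JupyterHub services each have their own analogous entry points, which is what the workshop's run-down surveys.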

In [10]:
# from IPython.display import IFrame
# IFrame("",width='100%',height="900")


  • Is there a code of conduct?

The Code of Conduct can be found in our GitHub repository.

Thank you to our Sponsors

O'Reilly Media
Jupyter Project
General Assembly
Georgia Tech Research Institute
Continuum Analytics
In [41]:
<div class="btn btn-default pull-right" id="toggle-source">Toggle the Source Code</div>
In [56]:
// Let the button toggle the visibility of code-cell inputs, and start
// with them hidden when the page is not served from the local notebook
// server on port 8888.
$('#toggle-source').on('click', function () { $('.code_cell .input').toggle(); });
if (window.location.port != "8888") { $('.code_cell .input').toggle(); }
In [32]:
@import url("");
.speakers .rendered_html p {
    text-align: justify;
}