from nbconvert import export_html

def chill():
    # Export this notebook to index.html, prefixed with an empty front-matter block.
    with open('index.html', 'w') as f:
        f.write("---\n---\n" + export_html('JupyterDay Atlanta.ipynb')[0])
Joshua L. Davis is a Senior Research Scientist and Division Head of the HPC/Data Analytics division at the Georgia Tech Research Institute. Mr. Davis has 10 years of experience in Department of Defense software and is the co-founder of the Military Open Source Software community. He holds an MBA and a B.S. in Computer Science, and is a Certified Information Systems Security Professional (CISSP) and Certified Ethical Hacker (C|EH).
Carol Willing is a Director of the Python Software Foundation, a core developer for Project Jupyter, and a Research Software Engineer at Cal Poly San Luis Obispo.
She's also Geek-In-Residence at Fab Lab San Diego and co-organizes PyLadies San Diego and San Diego Python. She's an active open source contributor and a maintainer for OpenHatch and the Anita Borg Institute's open source projects. Combining a love of nature, the arts, and math with a BSE in Electrical Engineering from Duke and an MS in Management from MIT, she's enjoyed creating and teaching others for over 20 years.
She recently spoke at Grace Hopper Celebration 2015, PyCon 2015, PyCon Philippines 2016, Write/Speak/Code 2016, and SciPy 2016.
Dr. Robert Clewley is a polymath scientist and educator, specializing in computational science and mathematical modeling. He has published academic articles about the modeling of epilepsy, cancer, cardiology, and biomechanics. His research has been supported by federal grants from NSF and the Army Research Laboratory. Dr. Clewley also develops the open source PyDSTool modeling software that is used internationally in many scientific and engineering fields.
chill()
In no particular order.
ADCIRC is an ocean circulation model that also produces storm surge and inundation outputs. The model has been run hundreds of times to compile an estimation matrix. In a Jupyter Dashboard application, a map of storm surge from a typical tropical storm is displayed. Using ipywidgets, sliders on the dashboard allow emergency managers to vary the properties of the storm, such as wind velocity and landfall location, to update the map with likely storm surge computed from the estimation matrix. This dashboard is the first use of the recently released ipympl, which treats matplotlib figures as ipywidgets.
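To give a flavor of the interaction pattern, here is a minimal sketch wiring ipywidgets sliders to a matplotlib plot. The surge function, value ranges, and coordinates below are placeholders; the real dashboard interpolates the precomputed ADCIRC estimation matrix and renders the map through ipympl.

# Illustrative only: a toy surge field driven by two ipywidgets sliders.
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact, FloatSlider

lon, lat = np.meshgrid(np.linspace(-90, -80, 200), np.linspace(25, 32, 200))

def toy_surge(wind_speed, landfall_lon):
    # Placeholder surge estimate: decays with distance from the landfall longitude.
    return wind_speed * np.exp(-((lon - landfall_lon) ** 2) / 2.0)

def show_surge(wind_speed=40.0, landfall_lon=-85.0):
    plt.figure(figsize=(6, 4))
    plt.contourf(lon, lat, toy_surge(wind_speed, landfall_lon), levels=20)
    plt.colorbar(label="surge (ft)")
    plt.title("Estimated surge, wind %.0f kt" % wind_speed)
    plt.show()

interact(show_surge,
         wind_speed=FloatSlider(min=20, max=120, step=5, value=40),
         landfall_lon=FloatSlider(min=-89, max=-81, step=0.5, value=-85));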
This talk gives the audience a tour of the latest features of JupyterLab, along with the intended path of JupyterLab development and how third-party extensions will be added. There is no knowledge requirement, but experience with the existing Jupyter Notebook would be beneficial. The preferred duration is 25 minutes, plus 5 minutes for questions.
JupyterHub brings the power of Jupyter notebooks to a group of users. University classes, workshops, and user groups can all benefit from notebooks being hosted on a common server. Configuring a JupyterHub server can be complex, with many details to get right. This presentation will give the audience an overview of JupyterHub’s architecture and how to install and configure JupyterHub. Inspired by Randall Munroe’s (“xkcd”) Thing Explainer, we’ll break down JupyterHub’s complexity into simple, understandable concepts. Whether you are new to JupyterHub or have some experience with it, you will leave this talk empowered to give JupyterHub a try.
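For a sense of what configuration involves, here is a minimal, illustrative jupyterhub_config.py; the user name, port, and notebook directory are placeholders, and a starting file can be generated with jupyterhub --generate-config.

# Illustrative jupyterhub_config.py -- all values are placeholders.
c.JupyterHub.ip = '0.0.0.0'                  # interface users connect to
c.JupyterHub.port = 8000                     # public port of the Hub
c.Authenticator.admin_users = {'carol'}      # hypothetical admin account
c.Spawner.notebook_dir = '~/notebooks'       # where each user's server starts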
Imaginary and complex numbers are perhaps the most underrated mathematical discovery ever. Functions of these variables are beautiful, elegant, and super useful in mathematics and science. However, as inherently four-dimensional objects, functions of complex variables are notoriously difficult to understand. With the help of the Jupyter Notebook, we’ll visualize our way through the deep and profound world of complex functions and Riemann Surfaces. This talk accompanies the final parts of the YouTube series Imaginary Numbers Are Real.
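As a taste of the visualization style (not code from the talk itself), a short domain-coloring sketch maps the phase of a complex function to hue and its magnitude to brightness; the example function is arbitrary.

# Domain coloring sketch: hue encodes arg(f(z)), brightness encodes |f(z)|.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

x, y = np.meshgrid(np.linspace(-2, 2, 800), np.linspace(-2, 2, 800))
z = x + 1j * y
f = (z ** 2 - 1) / (z ** 2 + 1)            # any complex function works here

hue = (np.angle(f) + np.pi) / (2 * np.pi)  # phase mapped into [0, 1]
val = 1 - 1 / (1 + np.abs(f) ** 0.3)       # magnitude mapped into [0, 1)
rgb = hsv_to_rgb(np.dstack([hue, np.ones_like(hue), val]))

plt.imshow(rgb, extent=[-2, 2, -2, 2], origin="lower")
plt.xlabel("Re(z)")
plt.ylabel("Im(z)")
plt.title("Domain coloring of (z**2 - 1) / (z**2 + 1)")
plt.show()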
O’Reilly Media needed to provide a way for authors to use Jupyter notebooks to create professional publications. We also wanted to integrate video narration into the UX. The result is a unique new learning medium called Oriole, in which Jupyter notebooks are used in the middleware, each viewer gets a 100% HTML experience (no download/install needed), the code and video are synced together, and each web session has a Docker container running in the cloud. Tutorials are much quicker to publish than “traditional” books and video. This talk will show examples and examine the system architecture, built from open source projects. We’ll review feedback from authors working in this medium on how to teach more effectively through video + notebooks + containers.
Scientists who study machine learning often plot the error of a model against the amount of data used to train that model. Such plots are known as learning curves or validation curves. In 1994, Cortes et al. proposed a method for fitting these curves with an exponential decay function. Their method provides a way to predict how different models stack up against each other. Importantly, it can avoid the computationally expensive process of estimating error for large training sets. With help from a Jupyter notebook, I will introduce exponential decay functions and give a brief derivation of Cortes et al.’s method. Then I will demonstrate how to fit learning curves with their model, using the datasets built into the scikit-learn library. I will also demonstrate some less-than-ideal fits using my own (lovely) data. Lastly, I will discuss how it might be possible to detect statistically significant differences between models using the fit parameters.
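A sketch of the fitting step, assuming scikit-learn's learning_curve output and a three-parameter decay e(n) = a + b*exp(-n/c); the functional form and estimator here are illustrative choices, not necessarily those used in the talk or in Cortes et al.

# Fit an exponential-decay model to a validation curve (illustrative form).
import numpy as np
from scipy.optimize import curve_fit
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = load_digits(return_X_y=True)
sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y, cv=5,
    train_sizes=np.linspace(0.1, 1.0, 8))
error = 1 - val_scores.mean(axis=1)          # mean validation error per size

def decay(n, a, b, c):
    return a + b * np.exp(-n / c)

params, _ = curve_fit(decay, sizes, error, p0=[0.05, 0.5, 200.0], maxfev=10000)
a, b, c = params
print("asymptotic error ~ %.3f, decay constant ~ %.0f samples" % (a, c))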
We are creating capabilities for modelers and data scientists to exploit diagnostics earlier in their model-building development cycle. Visualizations are not just for the end results! The process of model development needs more attention from visual diagnostic tools. We have a fledgling GUI (Fovea), extensions to Matplotlib, and some exciting examples that we want to push into Jupyter. We want more community input on these ideas.
People in all scientific disciplines spend an incredible amount of time reading and preparing published figures and graphs. Somewhere between 60-80% of all published scientific data is trapped in a .jpg file or a PDF somewhere on a publisher’s server. We live in an era of learning from big data, yet arguably the biggest, most validated, most scrutinized data collected by humankind is hiding in images. Automated extraction of data from images of figures and graphs has received a smattering of attention over the past decades, but the time has never been riper for this technology to come to the fore. With a dataset of hundreds of figures from open-access journals, how much information can we extract armed with just a Jupyter Notebook and computer vision libraries? The answer: quite a lot.
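As a toy illustration of the extraction step, the sketch below pulls candidate data-marker pixels out of a figure image; the file name is hypothetical, and a real pipeline also needs axis detection, tick-label OCR, and a pixel-to-data calibration.

# Toy sketch: locate dark marker pixels in a scatter-plot image.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("figure_from_paper.png").convert("L"))  # hypothetical file
mask = img < 100                       # dark pixels as candidate data markers
rows, cols = np.nonzero(mask)
print("found %d candidate marker pixels" % rows.size)
# Mapping pixels to data units would use the detected axis extents, e.g.
# x = x_min + (cols - x_px0) / (x_px1 - x_px0) * (x_max - x_min)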
In my research, I often want to explore a design space, sweeping out several parameter values to see which one delivers the best performance across a suite of trials. Extracting insight from hundreds of simulator output files is time consuming and error-prone. Visualizing the results of many experiments at once is the fastest way to validate the results and get feedback from my research group. In this talk I will showcase my end-to-end solution for visualizing simulation results with Jupyter notebook. The pandas and seaborn libraries are used to interactively generate paper-ready plots that are automatically labeled with the correct parameter names from the source data. In this short talk, audience members with a basic understanding of plotting in Jupyter will learn how to streamline the process of extracting, labeling, and visualizing data from raw text files.
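A minimal sketch of the plotting pattern, assuming the sweep results have already been parsed into a tidy CSV; the file and column names below are made up for illustration.

# Plot a parameter sweep from tidy results (hypothetical columns).
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

results = pd.read_csv("sweep_results.csv")   # hypothetical aggregated results
ax = sns.lineplot(data=results, x="cache_size_kb", y="runtime_s",
                  hue="prefetcher", marker="o")
ax.set_xlabel("Cache size (KB)")
ax.set_ylabel("Runtime (s)")
plt.tight_layout()
plt.savefig("sweep.pdf")                     # paper-ready figure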
In this workshop, Carol Willing and Paco Nathan guide you through the Jupyter universe.
Join this session to power up your notebook skills!
The Jupyter stack is built from the ground up to be extensible and hackable, but Jupyter is a big planet with many moons. Let’s learn about and build tools and approaches for getting started, testing, and distribution of Jupyter extensions. After a run-down of the extension points (sanctioned and otherwise) within the existing architecture, we’ll want to hear about existing extensions to the notebook server, the current notebook JS application, nbviewer, and JupyterHub… and start looking at next-generation tools like JupyterLab.
This workshop will be organized by Jupyter developers Steven Silvester and Nick Bollweg.
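As a tiny example of one sanctioned extension point, a notebook server extension is just an importable module that exposes load_jupyter_server_extension; the module name, route, and handler below are illustrative.

# hello_ext.py -- a minimal notebook server extension (illustrative names).
# Enable with: jupyter serverextension enable --py hello_ext
from notebook.base.handlers import IPythonHandler
from notebook.utils import url_path_join

class HelloHandler(IPythonHandler):
    def get(self):
        self.finish("Hello from a server extension!")

def load_jupyter_server_extension(nb_app):
    # Called once by the notebook server at startup.
    web_app = nb_app.web_app
    base_url = web_app.settings["base_url"]
    web_app.add_handlers(".*$", [(url_path_join(base_url, "/hello"), HelloHandler)])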
chill()
# from IPython.display import IFrame
# IFrame("https://docs.google.com/forms/d/14Kq65Xt8tWrUSMZF4vpnIYR96Vgs-7uLludFPn87EFU/viewform",width='100%',height="900")
chill()
%%html
<div class="btn btn-default pull-right" id="toggle-source">Toggle the Source Code</div>
%%javascript
// Wire the "Toggle the Source Code" button to show/hide code cells,
// and hide them by default when the page is not served locally on port 8888.
$(document).ready(function(){
    $('#toggle-source').click(function(){
        $('.code_cell .input').toggle();
    });
    if (window.location.port != "8888"){
        $('#toggle-source').click();
    }
});
%%html
<style>
@import url("https://maxcdn.bootstrapcdn.com/font-awesome/4.6.3/css/font-awesome.min.css");
.speakers .rendered_html p {
text-align: justify;
}
</style>