To explain the problem that prompted my interest in Python (and led me to become a ChiPy mentee), I'm going to briefly summarize the history of the electronics industry: from the invention of the light bulb in 1880, to the triode vacuum tube in 1906, to early tube-based digital computers, to the invention of the transistor, the development of the integrated circuit, the microprocessor, and finally the present day. Then I will describe how I use vacuum tubes in modern audio products, and how I overcame a common problem with tubes using Python, achieving a level of precision that my competitors said was not possible. Unlike most ChiPy presentations, mine will focus less on my project and more on the history, with extensive images, video clips, and diagrams covering a century of development in electronics and computing. I think I can make it fun and interesting for all.
Unicorn Markets has come out with another game. Its quality and purpose are a level beyond our PyWeek 23 entry. I will talk about the process of developing as a team in a short one-week span, the game itself, and what we learned about code design and architecture. And if there is nothing left to discuss, we will just play and investigate video games from the contest. If you want to try the game beforehand, run `git clone https://github.com/UnicornMarkets/Nightmarotony.git`, cd into the directory, and run `python run_game.py`. You will need Python (2 or 3 should work) and pygame. Because I forgot to remove the dependency, you will also need the Python Imaging Library (PIL), though it is not actually necessary for the game.
A brief overview of what SQL indexes are, why you need them, and how to add them (using SQL or Django code).
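To make the "how to add them" part concrete before the talk, here is a minimal sketch of the Django approach (the `Order` model and its fields are hypothetical, and the raw-SQL equivalent is shown in a comment):

```python
# Hypothetical Django model showing two ways to add an index.
from django.db import models

class Order(models.Model):
    customer_email = models.EmailField(db_index=True)  # single-column index
    status = models.CharField(max_length=20)
    created_at = models.DateTimeField()

    class Meta:
        # Composite index for queries that filter on status and sort by recency
        # (requires Django 1.11+ for Meta.indexes)
        indexes = [models.Index(fields=["status", "created_at"])]

# Roughly equivalent raw SQL:
#   CREATE INDEX order_status_created_idx ON app_order (status, created_at);
```

After adding either, `python manage.py makemigrations && python manage.py migrate` applies the index to the database.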
Investments in AI are heating up, with the total market estimated to reach as much as $126B by 2025. This talk will present case studies and code samples showing how our clients are using Python today and how we expect this to evolve over the next few years as AI becomes increasingly ubiquitous. Python enables each phase of the AI pipeline: DevOps, Data Engineering, Model Development, Deep Learning, Cognitive User Interfaces, and Microservices. This talk will highlight how Python serves as a common glue across multiple disciplines, allowing cross-functional teams to work together to get real results from AI.
Containerization technologies such as Docker enable software to run across various computing environments. Data Science requires auditable workflows where we can easily share and reproduce results. Docker is a useful tool that we can use to package libraries, code, and data into a single image. This talk will cover the basics of Docker; discuss how containers fit into Data Science workflows; and provide a quick-start guide that can be used as a template to create a shareable Docker image! Learn how to leverage the power of Docker without having to worry about the underlying details of the technology. Although this session is geared towards data scientists, the underlying concepts have many use cases (come find me after to discuss).
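As a small preview (not the talk's actual quick-start template), here is one way to script that packaging-and-sharing workflow from Python using the Docker SDK (`pip install docker`); the image name and project path are hypothetical:

```python
# Sketch: build, share, and rerun a Data Science image with the Docker SDK.
# Assumes a local Docker daemon and a ./my_analysis directory containing a
# Dockerfile alongside the code and data to be packaged.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Package libraries, code, and data into a single image
image, build_logs = client.images.build(path="./my_analysis",
                                        tag="myteam/analysis:v1")

# Share it by pushing to a registry (credentials assumed to be configured)
client.images.push("myteam/analysis", tag="v1")

# Anyone can now reproduce the result from the exact same environment
output = client.containers.run("myteam/analysis:v1",
                               command="python run_analysis.py")
print(output.decode())
```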
The 2017 hurricane season is proving to be one of the strongest in history, and predictive modeling plays an important role in evacuation and mitigation planning. Coastal communities in the path of hurricanes face several major hazards - strong winds, heavy rainfall, relentless waves, and storm surge. Storm surge is a type of transient sea level rise in which water is forced toward the shore by winds; under the right conditions it can reach very high levels - Hurricane Harvey raised Galveston Bay by upwards of ten feet, and in 2012 Hurricane Sandy produced a 12-foot surge in Lower Manhattan. I'll discuss the current state of storm surge modeling with a focus on an open-source package called GeoClaw, developed by academic researchers across the U.S. GeoClaw uses Python and Fortran to run dynamic simulations of coastal flooding from storm and topography datasets, and thanks to some novel dimensionality reduction it can be run on a laptop.
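As a flavor of the tooling, here is a minimal sketch of loading a topography grid with GeoClaw's topotools module (assumes clawpack is installed; the filename is hypothetical):

```python
# Sketch: read and inspect a gridded topography/bathymetry file with GeoClaw.
from clawpack.geoclaw import topotools

topo = topotools.Topography()
topo.read("galveston_bay.tt3", topo_type=3)  # type 3: header plus gridded rows

# Z is a 2D array of elevations (negative values are below sea level)
print(topo.Z.shape)
print("depth range:", topo.Z.min(), "to", topo.Z.max(), "meters")
```

A full GeoClaw run is then configured through a `setrun.py` file that specifies the storm, grid refinement, and output options.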
In an extended version of the lightning talk I gave at the spring ChiPy mentorship final presentations, I will go into more depth about how I collected and processed bus location data from the CTA's Bus Tracker API. I will also discuss interesting discoveries I made once I plotted the data, work I have done on the project since completing the mentorship (collecting data from 30 additional bus routes, converting visualizations from Bokeh/Python to D3.js, analyzing and visualizing bus bunching, etc.), as well as future plans for the project.
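For context, here is a minimal sketch of the kind of polling loop this involves, assuming a CTA Bus Tracker API key and the v2 JSON endpoint (exact response fields may differ):

```python
# Sketch: poll the CTA Bus Tracker API for live vehicle positions.
import time
import requests

API_KEY = "YOUR_KEY"  # issued via the CTA developer site
URL = "http://www.ctabustracker.com/bustime/api/v2/getvehicles"

def fetch_positions(route):
    """Return the list of vehicles currently reported on a route."""
    resp = requests.get(URL, params={"key": API_KEY, "rt": route,
                                     "format": "json"})
    resp.raise_for_status()
    return resp.json().get("bustime-response", {}).get("vehicle", [])

while True:
    for bus in fetch_positions("22"):  # route 22: Clark
        print(bus["vid"], bus["lat"], bus["lon"], bus["tmstmp"])
    time.sleep(60)  # poll once a minute to stay well under rate limits
```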
A continued narrative of the tale of two snakes. In this talk, we will discuss some of the most impressive features of Anaconda, including built-in binaries, the command line interface, the history of the distribution, and why it is the right choice for just about every Python stack. This talk does not assume audience familiarity with the distribution. We will take advantage of the *better* batteries-included nature of this distribution to step through beginner and intermediate concepts. I intend for the audience to feel comfortable and excited to give Anaconda a try on their own.