NuMat Technologies is a team of chemists, chemical engineers, and computer scientists developing advanced materials to remove toxic chemicals and greenhouse gases from air, water, and more.
Founded by computer scientists and working with a new material class that lends itself well to computational design, NuMat puts computation at the forefront of the business. Whether it's developing automated material design "recommendation engines", building robotics for high-throughput experimentation, or maintaining our in-house enterprise resource planning applications, NuMat's computational team touches every aspect of the company.
Expect to see async programming and advanced communication protocols in our robotics, Django+SQL in our ERP applications, and HPC tooling such as Dask and Jupyter in our computational materials design.
At JFrog, we are making endless software versions a thing of the past, with liquid software that flows continuously and automatically from build all the way to deployment. With this in mind, we've developed the world's first universal artifact management platform, ushering in a new era in DevOps – Continuous Updates. Ten years later, with thousands of customers and millions of users globally, JFrog has become the "Database of DevOps" and the de facto standard in release and update management.
JFrog embraces the Python language for multiple use cases and technology solutions, including provisioning machines, tooling for Pipelines, creating machine learning models, securing Python modules, and even Python-based microservices in the JFrog Platform.
Zoro is an online distributor of products for B2B customers, focused on helping small businesses easily find what they need to grow and maintain their businesses. Today, we have over eight million products available—and that number is expected to keep growing. We work with third-party suppliers to provide products and fulfill orders for our customers.
Zoro uses Python with Django for its e-commerce site, as well as for data science, ETLs, and microservices.
Narrative Science is a data storytelling company that has been dynamically writing stories and reports for over a decade.
This talk will go over how we are using Python and its rich ecosystem to move toward a microservices architecture that will create a more scalable and fault-tolerant product.
Easy to Build Python Dashboards Using Financial Data APIs
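For a taste of what such a dashboard can look like, here is a minimal sketch; the talk does not name specific libraries or data sources, so the use of yfinance for market data, Plotly Dash for the UI, and the sample ticker are assumptions for illustration only.

# Minimal dashboard sketch -- yfinance, Plotly, and Dash are assumed here,
# not necessarily the tools covered in the talk.
import yfinance as yf
import plotly.express as px
from dash import Dash, dcc, html

# Pull six months of daily prices for one sample ticker.
prices = yf.Ticker("AAPL").history(period="6mo").reset_index()

app = Dash(__name__)
app.layout = html.Div([
    html.H2("AAPL closing price, last six months"),
    dcc.Graph(figure=px.line(prices, x="Date", y="Close")),
])

if __name__ == "__main__":
    app.run(debug=True)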
Automated tests are a great way to iterate fast and ensure existing features don't break. This talk discusses how to speed up your builds and dev cycle even more by running tests asynchronously using the pytest plugin pytest-asyncio-cooperative.
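For a flavor of the approach, here is a minimal sketch of two cooperative async tests, assuming the pytest-asyncio-cooperative plugin is installed; the sleeps simply stand in for slow I/O.

# Both tests yield to the event loop while they await, so they run
# concurrently and the suite finishes in roughly the time of the longest
# test rather than the sum of both.
import asyncio
import pytest


@pytest.mark.asyncio_cooperative
async def test_first_slow_operation():
    await asyncio.sleep(2)  # stands in for a slow network call


@pytest.mark.asyncio_cooperative
async def test_second_slow_operation():
    await asyncio.sleep(2)  # runs concurrently with the test above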
In today's world, AI has become an essential tool for achieving the seemingly unthinkable, helping create innovative solutions for almost every industry. Amid this ever-growing demand for computerized intelligence, an active research question is how AI-based intelligence can be interpreted and utilized by HR (Human Resources), from predictive analysis to automation. Because the HR department is responsible for recruiting and bringing valuable talent into a company, it is essential that this task is done with maximum efficiency. Through this project, we intend to predict which employees would prefer a job change and which would stay with a company, and to help assess the resources that need to be invested in an employee. This presentation will take you through the principles of using Python, opinion mining, and several widely used classifiers, namely Random Forest (RF), CatBoost, Support Vector Machine (SVM), and Naïve Bayes (NB).
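As a rough illustration of the modeling step, the sketch below trains one of those classifiers on a hypothetical HR dataset; the file name, feature columns, and label are placeholders, not the project's actual data.

# A minimal sketch of the prediction step, assuming a hypothetical
# hr_data.csv with numeric features and a binary "will_change_job" label.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("hr_data.csv")  # placeholder file name
X = df.drop(columns=["will_change_job"])
y = df["will_change_job"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Report precision/recall for "stays" vs. "prefers a job change".
print(classification_report(y_test, model.predict(X_test)))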
Building machine learning (ML) models is faster and easier now than ever before. The proliferation of open-source libraries means data scientists can leverage cutting-edge pre-trained models in just a few lines of code. Yet it remains true that most ML models never make it to production. Why? Because making it to production (and staying there) is about more than just model and code quality. In particular, this talk will discuss how MLOps can greatly accelerate and increase the chances of model success.
Specifically, the talk will walk through the full ML lifecycle and answer: What is MLOps? Why is it important? How can MLOps infrastructure be set up quickly, easily, and with open source tools? How can the system be designed in a user-friendly way, but without too much magic? How can user adoption be accelerated?
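As one concrete example of the open-source angle, the sketch below logs a training run with MLflow; MLflow is only one of several tools the talk might cover, and the dataset, parameters, and metric here are placeholders.

# A minimal sketch of open-source experiment tracking with MLflow --
# one possible ingredient of an MLOps stack, not necessarily the talk's.
import mlflow
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")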
While it's expected that data-science-related professionals will garner the most value from this talk, no prior MLOps/ML background is required to understand its contents.
Test data for automated tests can be a nightmare to manage. Data must be prepped in advance, loaded before testing, and cleaned up afterwards. Sometimes, teams don't have much control over the data in their systems under test; it's just dropped in, and it can change arbitrarily. Hard-coding values that reference the system under test can make tests brittle, especially when running them in different environments. In this talk, I'll teach strategies for managing each type of test data: test case variations, test control inputs, config metadata, and product state. We will cover how to "discover" test data instead of hard-coding it, how to pass inputs into automation (including secrets like passwords), and how to manage data in the system under test. After this talk, you will wake up from the nightmare and handle test data cleanly and efficiently like a pro!
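To make the "discover, don't hard-code" idea concrete, here is a minimal pytest-style sketch; the environment variable names and fixtures below are hypothetical placeholders rather than anything from the talk.

# A minimal sketch of discovering test inputs instead of hard-coding them,
# assuming pytest; TEST_BASE_URL and TEST_API_PASSWORD are placeholders.
import os
import pytest


@pytest.fixture(scope="session")
def base_url():
    # The target environment comes from config, so the same tests can run
    # against dev, staging, or production without code changes.
    return os.environ.get("TEST_BASE_URL", "http://localhost:8000")


@pytest.fixture(scope="session")
def api_password():
    # Secrets are injected from the environment (or a vault), never
    # committed to the test code itself.
    password = os.environ.get("TEST_API_PASSWORD")
    if password is None:
        pytest.skip("TEST_API_PASSWORD is not set for this environment")
    return password


def test_environment_is_configured(base_url, api_password):
    # A smoke check that the discovered configuration is usable before
    # the real tests run against it.
    assert base_url.startswith("http")
    assert api_password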