I get it; you’re busy. Even if you had the time to read every tutorial and watch every video available, you probably missed something. Data is a vast subject. As this year winds down, I thought it would be a good idea to bring your attention to the content you might have missed. I’ve divided data into two main topics: data management and data analytics. Hopefully, this makes it easier for you to find what pertains to you.
If you’re into machine learning, data science, and analytics, this is the post for you. If you’re into databases, connecting to data, storing data, and more, I’ll be publishing a data management wrap-up post soon.
Want to make sure you don’t miss anything in the future? Subscribe to a developerWorks newsletter. The newsletters publish monthly and contain all the latest and greatest content published on developerWorks.
Data analytics tutorials you might have missed
Using data science to manage a software project in a GitHub organization
In this two-part tutorial series, create a data science skeleton project, explore it with Jupyter Notebooks, and deploy it to the Python Package Index.
Social power, influence, and performance in the NBA
In this tutorial series, learn the basics of data science and machine learning as you explore the valuation of teams and individual players in the NBA, predicting social influence using Python, pandas, and machine learning (and a touch of R).
Develop an IBM i2 Analyze data access on-demand connector
i2 Analyze allows you to make smart, informed business decisions by providing visual analysis tools to gain insight from your data. In this tutorial, learn to develop i2 connectors so you can enrich your investigation with information from third-party data sources.
Extract insights from social media posts with Watson and Spark in Data Science Experience
In this tutorial, you will go through the complete journey of acquiring data, curating and cleansing the data, analyzing and visualizing the data, and enriching the data to drive value.
Developing cognitive IoT solutions for anomaly detection by using deep learning
In this four-part tutorial series, Romeo Kienzler delves into anomaly detection with deep learning, giving you a sufficient understanding of neural networks and what applying deep learning concepts to the IoT data in your cognitive system can do for you.
Integrate your data with the Hyperledger Fabric blockchain
Data integration doesn’t have to be complicated! The Hyperledger Composer application development toolset provides easy integration for your data (such as assets and participants that you create and update, and transactions that you execute) with the business network that you work with (such as a commodities trading, property, or supply chain network). Hyperledger Composer and REST APIs make data integration easy!
Data analytics videos you might have missed
The developerWorks TV crew has worked hard this year creating many videos. Not surprisingly, there are a lot of videos about data science (it’s a hot topic).
developerWorks Live features live coding events, webinars, and “ask me anything” sessions for developers who want to improve their skills and stay on top of their industry. There are many videos to interest the data scientist, including:
- Create a deep learning-based anomaly detector using Deeplearning4j and Apache SystemML on Apache Spark
- Gaming the gamer: Using data science to up your game
- Kaggle analysis with Spark ML
- Get your analytics on with Blockchain
Discovering Data Science
Romeo Kienzler is the Chief Data Scientist for IBM Watson IoT. He is passionate about solving the hardest data science challenges as a client advocate of the IBM Academy of Technology. Follow Romeo in the Discovering Data Science video series.
Analytics for file and object data in place
Chris Thomas discusses variety, volume, and velocity, the three challenges to data analytics when analyzing large data sets.