Data Scientist Roadmap

Data Science · Machine Learning · Deep Learning · Python · R · TensorFlow · PyTorch · MLOps


Overview

Data scientists are specialists who extract valuable insights from large amounts of data and support business decision-making. As of 2025, skills in generative AI and MLOps have become essential, driven by surging demand for AI-powered automation and personalized experiences. Data scientists are increasingly expected to build end-to-end solutions, from machine learning model development through production deployment.

Details

Phase 1: Building Foundation (3-6 months)

Programming Fundamentals

  • Complete Python Mastery

    • Data types and control structures
    • Functions and classes
    • Decorators and generators (see the short sketch after this list)
    • Virtual environment management (venv, conda)
    • Jupyter Notebook/Lab
  • R Language (Statistical Analysis)

    • Data frame manipulation
    • Visualization with ggplot2
    • Data manipulation with dplyr
    • Statistical testing and inference
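
Decorators and generators come up constantly in data workflows (timing, caching, streaming data in chunks). A minimal, self-contained sketch; the timing decorator and batch generator here are purely illustrative:

# decorators_and_generators.py
import time
from functools import wraps

def timed(func):
    """Decorator that reports how long a function call takes."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"{func.__name__} took {time.perf_counter() - start:.4f}s")
        return result
    return wrapper

def batches(items, size):
    """Generator that yields fixed-size chunks instead of materializing them all."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

@timed
def total(values):
    return sum(sum(chunk) for chunk in batches(values, size=1000))

if __name__ == "__main__":
    print(total(list(range(10_000))))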

Mathematics and Statistics Foundation

  • Linear Algebra

    • Vector and matrix operations
    • Eigenvalues and eigenvectors
    • Singular Value Decomposition (SVD)
    • Principal Component Analysis (PCA)
  • Statistics

    • Descriptive and inferential statistics
    • Probability distributions
    • Hypothesis testing
    • Bayesian statistics
  • Calculus

    • Partial derivatives
    • Gradient descent (illustrated in the NumPy sketch after this list)
    • Optimization theory
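
To make the calculus and optimization items concrete, here is a small NumPy sketch that fits a least-squares line by gradient descent; the synthetic data, learning rate, and step count are illustrative:

# gradient_descent_demo.py
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=100)
y = 3.0 * X + 2.0 + rng.normal(scale=1.0, size=100)  # true slope 3, intercept 2

w, b = 0.0, 0.0   # parameters to learn
lr = 0.01         # learning rate (illustrative)

for step in range(2000):
    error = w * X + b - y
    # Partial derivatives of the mean squared error with respect to w and b
    grad_w = 2 * np.mean(error * X)
    grad_b = 2 * np.mean(error)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"Estimated slope={w:.2f}, intercept={b:.2f}")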

Data Manipulation and SQL

  • Complete SQL Understanding

    • Complex JOIN operations
    • Window functions
    • CTEs and subqueries
    • Index optimization
  • Data Manipulation Libraries

    • pandas (dataframe manipulation; window-function example after this list)
    • NumPy (numerical computation)
    • Polars (high-speed data processing)
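
SQL window functions have close analogues in the DataFrame libraries above. The snippet below uses made-up sales data to compute a per-group running total and an in-group rank in pandas, roughly what SUM(...) OVER (PARTITION BY ... ORDER BY ...) and RANK() would do in SQL:

# window_functions_pandas.py
import pandas as pd

sales = pd.DataFrame({
    "store":  ["A", "A", "A", "B", "B", "B"],
    "day":    [1, 2, 3, 1, 2, 3],
    "amount": [100, 120, 90, 80, 200, 150],
})

# Running total per store, ordered by day
sales = sales.sort_values(["store", "day"])
sales["running_total"] = sales.groupby("store")["amount"].cumsum()

# Rank each day's amount within its store (largest = rank 1)
sales["rank_in_store"] = sales.groupby("store")["amount"].rank(ascending=False)

print(sales)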

Phase 2: Machine Learning Fundamentals (6-12 months)

Machine Learning Algorithms

  • Supervised Learning

    • Linear and logistic regression
    • Decision trees and Random Forest
    • Gradient Boosting (XGBoost, LightGBM)
    • Support Vector Machines (SVM)
    • Neural network basics
  • Unsupervised Learning

    • K-means clustering (combined with PCA in the sketch after this list)
    • Hierarchical clustering
    • DBSCAN
    • Dimensionality reduction (PCA, t-SNE, UMAP)
  • scikit-learn Mastery

    • Pipeline construction
    • Cross-validation
    • Hyperparameter tuning
    • Model evaluation metrics
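
A short unsupervised-learning sketch tying K-means to PCA, using scikit-learn's bundled Iris dataset so it runs as-is:

# clustering_demo.py
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X, _ = load_iris(return_X_y=True)

# Standardize, then project onto two principal components
X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X_scaled)
print("Explained variance of 2 components:", pca.explained_variance_ratio_.sum())

# Cluster in the reduced space and check quality with the silhouette score
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = kmeans.fit_predict(X_2d)
print("Silhouette score:", silhouette_score(X_2d, labels))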

Data Visualization

  • Visualization Tools

    • Matplotlib (basic plots)
    • Seaborn (statistical visualization)
    • Plotly (interactive visualization)
    • Altair (declarative visualization)
  • Dashboard Creation

    • Streamlit (minimal app shown after this list)
    • Dash
    • Gradio
    • Panel
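
Dashboards are often the fastest way to put results in front of stakeholders. A minimal Streamlit sketch with synthetic data standing in for a real dataset (run it with: streamlit run app.py):

# app.py -- minimal Streamlit dashboard (synthetic data)
import numpy as np
import pandas as pd
import streamlit as st

st.title("Daily Metric Explorer")

# Synthetic random-walk series standing in for a real metric
days = st.slider("Days to show", min_value=7, max_value=90, value=30)
dates = pd.date_range("2025-01-01", periods=days, freq="D")
values = np.cumsum(np.random.default_rng(0).normal(size=days))
df = pd.DataFrame({"date": dates, "value": values}).set_index("date")

st.line_chart(df)
st.caption(f"Showing {days} days of synthetic data")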

Feature Engineering

  • Feature Creation

    • Numerical transformations
    • Categorical variable encoding
    • Time series features
    • Text features
  • Feature Selection

    • Correlation analysis
    • Mutual information
    • Permutation importance (see the sketch after this list)
    • SHAP values
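
The sketch below combines two of the items above on a synthetic dataset: one-hot encoding of a categorical column inside a pipeline, then permutation importance on the held-out set. The column names and data are made up:

# feature_engineering_demo.py
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "age": rng.integers(18, 70, 500),
    "income": rng.normal(50_000, 15_000, 500),
    "segment": rng.choice(["basic", "plus", "pro"], 500),
})
y = (df["income"] + (df["segment"] == "pro") * 20_000 > 60_000).astype(int)

# One-hot encode the categorical column, pass numeric columns through
preprocess = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), ["segment"])],
    remainder="passthrough",
)
model = Pipeline([("prep", preprocess),
                  ("clf", RandomForestClassifier(random_state=42))])

X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.2, random_state=42)
model.fit(X_train, y_train)

# Permutation importance on the original (pre-encoding) columns
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=42)
for name, importance in zip(df.columns, result.importances_mean):
    print(f"{name}: {importance:.3f}")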

Phase 3: Deep Learning and MLOps (12-18 months)

Deep Learning Frameworks

  • TensorFlow/Keras

    • Sequential API
    • Functional API
    • Custom layers and models
    • TensorFlow Serving
  • PyTorch

    • Tensor operations
    • Automatic differentiation (illustrated in the sketch after this list)
    • Custom datasets
    • PyTorch Lightning
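
A tiny PyTorch sketch of tensor operations and automatic differentiation: the same least-squares fit as the NumPy example in Phase 1, but with gradients computed by autograd and applied by an optimizer:

# pytorch_autograd_demo.py
import torch

# Synthetic data: y = 3x + 2 plus noise
torch.manual_seed(0)
x = torch.linspace(0, 10, 100)
y = 3 * x + 2 + torch.randn(100)

# Parameters tracked by autograd
w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
optimizer = torch.optim.SGD([w, b], lr=0.01)

for step in range(2000):
    optimizer.zero_grad()
    loss = torch.mean((w * x + b - y) ** 2)  # mean squared error
    loss.backward()                          # gradients via autograd
    optimizer.step()

print(f"w={w.item():.2f}, b={b.item():.2f}")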

Deep Learning Architectures

  • Computer Vision

    • CNN (Convolutional Neural Networks)
    • Transfer learning (ResNet, EfficientNet)
    • Object detection (YOLO, Faster R-CNN)
    • Segmentation
  • Natural Language Processing

    • RNN and LSTM
    • Transformer
    • BERT, GPT
    • Fine-tuning
  • Generative AI

    • VAE (Variational Autoencoder)
    • GAN (Generative Adversarial Networks)
    • Diffusion Models
    • LLM utilization (see the inference sketch after this list)
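
Pretrained Transformer models are usually consumed through a library rather than built from scratch. A minimal inference sketch assuming the Hugging Face transformers package is installed (models are downloaded on first use; gpt2 is just an example checkpoint):

# transformer_inference_demo.py
from transformers import pipeline

# Sentiment analysis with the pipeline's default pretrained model
classifier = pipeline("sentiment-analysis")
print(classifier("This roadmap makes the learning path much clearer."))

# Text generation with a small GPT-2 checkpoint
generator = pipeline("text-generation", model="gpt2")
print(generator("Data science in 2025 is", max_new_tokens=20)[0]["generated_text"])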

MLOps Practice

  • Experiment Management

    • MLflow (quick-start sketch after this list)
    • Weights & Biases
    • Neptune.ai
    • DVC (Data Version Control)
  • Model Deployment

    • Dockerization
    • Kubernetes
    • Model serving (TensorFlow Serving, TorchServe)
    • API development (FastAPI, Flask)
  • Monitoring

    • Data drift detection
    • Model performance monitoring
    • Evidently
    • WhyLabs
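
Most of the tools above follow the same log-parameters/metrics/artifacts pattern. A minimal MLflow tracking sketch (the experiment name and logged values are made up); the full MLOps pipeline example at the end of this page goes much further:

# mlflow_quickstart.py
import mlflow

mlflow.set_experiment("roadmap-demo")

with mlflow.start_run(run_name="baseline"):
    # Hyperparameters and metrics here are illustrative values
    mlflow.log_param("model_type", "random_forest")
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", 0.93)
    mlflow.log_metric("roc_auc", 0.97)

    # Arbitrary files (plots, configs, model cards) can be attached as artifacts
    with open("notes.txt", "w") as f:
        f.write("baseline run for the roadmap example")
    mlflow.log_artifact("notes.txt")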

Phase 4: Advanced Data Science and Specialization (18-24 months)

Big Data Technologies

  • Distributed Processing

    • Apache Spark
    • Dask
    • Ray
    • PySpark (local example after this list)
  • Data Pipelines

    • Apache Airflow
    • Prefect
    • Dagster
    • Apache Kafka
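
A minimal PySpark sketch of the distributed DataFrame API, assuming Spark and a JVM are available locally; the file path and column names are placeholders:

# spark_demo.py
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.master("local[*]").appName("demo").getOrCreate()

# Placeholder path -- point this at a real CSV file
df = spark.read.csv("sales.csv", header=True, inferSchema=True)

# Aggregations are built lazily and executed across partitions
summary = (df.groupBy("store")
             .agg(F.sum("amount").alias("total_amount"),
                  F.avg("amount").alias("avg_amount"))
             .orderBy(F.desc("total_amount")))

summary.show(10)
spark.stop()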

Cloud Platforms

  • AWS

    • SageMaker
    • EMR
    • Glue
    • Lambda
  • Google Cloud

    • Vertex AI
    • BigQuery
    • Dataflow
    • AI Platform
  • Azure

    • Azure Machine Learning
    • Databricks
    • Synapse Analytics

Specialization Areas

  • Time Series Forecasting

    • ARIMA, SARIMA (see the statsmodels sketch after this list)
    • Prophet
    • LSTM for Time Series
    • State space models
  • Recommendation Systems

    • Collaborative filtering
    • Content-based filtering
    • Hybrid methods
    • Deep learning-based recommendations
  • Causal Inference

    • A/B test design
    • Propensity score matching
    • Difference-in-differences
    • Causal forests
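
For the time series items, here is a small statsmodels sketch that fits an ARIMA model to a synthetic monthly series and forecasts six steps ahead; the (1, 1, 1) order is illustrative, not tuned:

# arima_demo.py
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly series: upward trend plus noise
rng = np.random.default_rng(0)
index = pd.date_range("2020-01-01", periods=60, freq="MS")
values = np.linspace(100, 160, 60) + rng.normal(scale=5, size=60)
series = pd.Series(values, index=index)

# Fit ARIMA(1, 1, 1); in practice the order is chosen via ACF/PACF plots or AIC
fitted = ARIMA(series, order=(1, 1, 1)).fit()
print(fitted.summary())

# Forecast the next six months
forecast = fitted.get_forecast(steps=6)
print(forecast.predicted_mean)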

Edge AI and On-device ML

  • Model Optimization

    • TensorFlow Lite
    • ONNX Runtime
    • Model quantization (see the conversion sketch after this list)
    • Pruning
  • Edge Devices

    • NVIDIA Jetson
    • Google Coral
    • Intel Neural Compute Stick
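
Optimizing a model for edge deployment usually starts with conversion and quantization. A minimal TensorFlow Lite sketch using a throwaway Keras model in place of a trained one:

# tflite_conversion_demo.py
import tensorflow as tf

# Throwaway model standing in for a trained one
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convert to TensorFlow Lite with default (dynamic range) quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)

print(f"Converted model size: {len(tflite_model) / 1024:.1f} KB")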

Advantages and Disadvantages

Advantages

  • High demand and compensation: Data scientists remain among the most in-demand professionals and command high salaries
  • Wide industry opportunities: Data science is needed across all industries including finance, healthcare, retail, and manufacturing
  • Interesting problem-solving: Creative work solving complex business challenges with data
  • Cutting-edge technology: Always working with the latest AI/ML technologies
  • Impact: Make significant organizational impact through data-driven decision making

Disadvantages

  • Continuous learning: Rapid technological advancement requires constant skill acquisition
  • Data quality challenges: Real-world data is messy, requiring significant time for preprocessing
  • Expectation management: Need to handle the gap between excessive AI expectations and reality
  • Explainability: It can be difficult to explain model reasoning to non-technical stakeholders
  • Ethical challenges: Dealing with AI ethical issues like bias and privacy


Code Examples

Basic Machine Learning Pipeline

# ml_pipeline.py
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, confusion_matrix, roc_auc_score
from sklearn.pipeline import Pipeline
import joblib
import matplotlib.pyplot as plt
import seaborn as sns

# Data loading and EDA
class DataProcessor:
    def __init__(self, filepath):
        self.data = pd.read_csv(filepath)
        self.X = None
        self.y = None
        
    def explore_data(self):
        """Basic data exploration"""
        print("Data shape:", self.data.shape)
        print("\nData types:")
        print(self.data.dtypes)
        print("\nMissing values:")
        print(self.data.isnull().sum())
        print("\nBasic statistics:")
        print(self.data.describe())
        
        # Correlation matrix visualization
        numeric_cols = self.data.select_dtypes(include=[np.number]).columns
        plt.figure(figsize=(10, 8))
        sns.heatmap(self.data[numeric_cols].corr(), annot=True, cmap='coolwarm')
        plt.title('Feature Correlation Matrix')
        plt.show()
        
    def preprocess(self, target_col):
        """Data preprocessing"""
        # Separate features and target
        self.X = self.data.drop(columns=[target_col])
        self.y = self.data[target_col]
        
        # Handle categorical variables
        categorical_cols = self.X.select_dtypes(include=['object']).columns
        for col in categorical_cols:
            le = LabelEncoder()
            self.X[col] = le.fit_transform(self.X[col].astype(str))
        
        # Handle missing values
        self.X = self.X.fillna(self.X.mean())
        
        return self.X, self.y

# Model training and evaluation
class MLPipeline:
    def __init__(self, X, y):
        self.X_train, self.X_test, self.y_train, self.y_test = train_test_split(
            X, y, test_size=0.2, random_state=42, stratify=y
        )
        self.pipeline = None
        self.best_model = None
        
    def create_pipeline(self):
        """Create pipeline"""
        self.pipeline = Pipeline([
            ('scaler', StandardScaler()),
            ('classifier', RandomForestClassifier(random_state=42))
        ])
        
    def hyperparameter_tuning(self):
        """Hyperparameter tuning"""
        param_grid = {
            'classifier__n_estimators': [100, 200, 300],
            'classifier__max_depth': [10, 20, None],
            'classifier__min_samples_split': [2, 5, 10],
            'classifier__min_samples_leaf': [1, 2, 4]
        }
        
        grid_search = GridSearchCV(
            self.pipeline,
            param_grid,
            cv=5,
            scoring='roc_auc',
            n_jobs=-1,
            verbose=1
        )
        
        grid_search.fit(self.X_train, self.y_train)
        self.best_model = grid_search.best_estimator_
        
        print("Best parameters:", grid_search.best_params_)
        print("Best score:", grid_search.best_score_)
        
    def evaluate_model(self):
        """Model evaluation"""
        # Predictions
        y_pred = self.best_model.predict(self.X_test)
        y_pred_proba = self.best_model.predict_proba(self.X_test)[:, 1]
        
        # Evaluation metrics
        print("\nClassification Report:")
        print(classification_report(self.y_test, y_pred))
        
        print("\nROC-AUC Score:", roc_auc_score(self.y_test, y_pred_proba))
        
        # Confusion matrix
        cm = confusion_matrix(self.y_test, y_pred)
        plt.figure(figsize=(8, 6))
        sns.heatmap(cm, annot=True, fmt='d', cmap='Blues')
        plt.title('Confusion Matrix')
        plt.ylabel('True Label')
        plt.xlabel('Predicted Label')
        plt.show()
        
        # Feature importance
        feature_importance = self.best_model.named_steps['classifier'].feature_importances_
        feature_names = self.X_train.columns
        
        importance_df = pd.DataFrame({
            'feature': feature_names,
            'importance': feature_importance
        }).sort_values('importance', ascending=False)
        
        plt.figure(figsize=(10, 8))
        sns.barplot(data=importance_df.head(15), x='importance', y='feature')
        plt.title('Top 15 Feature Importances')
        plt.show()
        
    def save_model(self, filepath):
        """Save model"""
        joblib.dump(self.best_model, filepath)
        print(f"Model saved to {filepath}")

# Usage example
if __name__ == "__main__":
    # Data processing
    processor = DataProcessor('data.csv')
    processor.explore_data()
    X, y = processor.preprocess('target')
    
    # Model training
    ml_pipeline = MLPipeline(X, y)
    ml_pipeline.create_pipeline()
    ml_pipeline.hyperparameter_tuning()
    ml_pipeline.evaluate_model()
    ml_pipeline.save_model('best_model.pkl')

Deep Learning for Image Classification

# deep_learning_cnn.py
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

class ImageClassifier:
    def __init__(self, input_shape, num_classes):
        self.input_shape = input_shape
        self.num_classes = num_classes
        self.model = None
        self.history = None
        
    def build_cnn_model(self):
        """Build CNN model"""
        self.model = keras.Sequential([
            # Convolutional block 1
            layers.Conv2D(32, (3, 3), activation='relu', input_shape=self.input_shape),
            layers.BatchNormalization(),
            layers.MaxPooling2D((2, 2)),
            layers.Dropout(0.25),
            
            # Convolutional block 2
            layers.Conv2D(64, (3, 3), activation='relu'),
            layers.BatchNormalization(),
            layers.MaxPooling2D((2, 2)),
            layers.Dropout(0.25),
            
            # Convolutional block 3
            layers.Conv2D(128, (3, 3), activation='relu'),
            layers.BatchNormalization(),
            layers.MaxPooling2D((2, 2)),
            layers.Dropout(0.25),
            
            # Fully connected layers
            layers.Flatten(),
            layers.Dense(256, activation='relu'),
            layers.BatchNormalization(),
            layers.Dropout(0.5),
            layers.Dense(self.num_classes, activation='softmax')
        ])
        
        # Compile model
        self.model.compile(
            optimizer=keras.optimizers.Adam(learning_rate=0.001),
            loss='categorical_crossentropy',
            metrics=['accuracy', keras.metrics.TopKCategoricalAccuracy(k=3)]
        )
        
        self.model.summary()
        
    def create_data_augmentation(self):
        """Set up data augmentation"""
        data_augmentation = keras.Sequential([
            layers.RandomFlip("horizontal"),
            layers.RandomRotation(0.1),
            layers.RandomZoom(0.1),
            layers.RandomBrightness(0.1),
            layers.RandomContrast(0.1),
        ])
        return data_augmentation
        
    def train(self, X_train, y_train, X_val, y_val, epochs=50):
        """Train model"""
        # Callback setup
        callbacks = [
            keras.callbacks.EarlyStopping(
                monitor='val_loss',
                patience=10,
                restore_best_weights=True
            ),
            keras.callbacks.ReduceLROnPlateau(
                monitor='val_loss',
                factor=0.5,
                patience=5,
                min_lr=1e-7
            ),
            keras.callbacks.ModelCheckpoint(
                'best_model.h5',
                monitor='val_accuracy',
                save_best_only=True
            )
        ]
        
        # Apply data augmentation through a tf.data pipeline so the
        # augmentation layers defined above are actually used during training
        data_augmentation = self.create_data_augmentation()
        train_ds = (
            tf.data.Dataset.from_tensor_slices((X_train, y_train))
            .shuffle(1024)
            .batch(32)
            .map(lambda x, y: (data_augmentation(x, training=True), y),
                 num_parallel_calls=tf.data.AUTOTUNE)
            .prefetch(tf.data.AUTOTUNE)
        )
        
        # Training
        self.history = self.model.fit(
            train_ds,
            epochs=epochs,
            validation_data=(X_val, y_val),
            callbacks=callbacks,
            verbose=1
        )
        
    def plot_training_history(self):
        """Visualize training history"""
        fig, axes = plt.subplots(1, 2, figsize=(15, 5))
        
        # Loss plot
        axes[0].plot(self.history.history['loss'], label='Training Loss')
        axes[0].plot(self.history.history['val_loss'], label='Validation Loss')
        axes[0].set_title('Model Loss')
        axes[0].set_xlabel('Epoch')
        axes[0].set_ylabel('Loss')
        axes[0].legend()
        axes[0].grid(True)
        
        # Accuracy plot
        axes[1].plot(self.history.history['accuracy'], label='Training Accuracy')
        axes[1].plot(self.history.history['val_accuracy'], label='Validation Accuracy')
        axes[1].set_title('Model Accuracy')
        axes[1].set_xlabel('Epoch')
        axes[1].set_ylabel('Accuracy')
        axes[1].legend()
        axes[1].grid(True)
        
        plt.tight_layout()
        plt.show()
        
    def evaluate(self, X_test, y_test):
        """Evaluate model"""
        test_loss, test_accuracy, test_top3_accuracy = self.model.evaluate(
            X_test, y_test, verbose=0
        )
        
        print(f"\nTest Loss: {test_loss:.4f}")
        print(f"Test Accuracy: {test_accuracy:.4f}")
        print(f"Top-3 Accuracy: {test_top3_accuracy:.4f}")
        
        # Display sample predictions
        predictions = self.model.predict(X_test[:10])
        
        fig, axes = plt.subplots(2, 5, figsize=(15, 6))
        axes = axes.ravel()
        
        for i in range(10):
            axes[i].imshow(X_test[i])
            axes[i].set_title(f"Pred: {np.argmax(predictions[i])}, True: {np.argmax(y_test[i])}")
            axes[i].axis('off')
            
        plt.tight_layout()
        plt.show()

# Transfer learning implementation
class TransferLearningClassifier:
    def __init__(self, input_shape, num_classes):
        self.input_shape = input_shape
        self.num_classes = num_classes
        self.model = None
        
    def build_transfer_model(self, base_model_name='EfficientNetB0'):
        """Build transfer learning model"""
        # Load base model
        if base_model_name == 'EfficientNetB0':
            base_model = keras.applications.EfficientNetB0(
                input_shape=self.input_shape,
                include_top=False,
                weights='imagenet'
            )
        elif base_model_name == 'ResNet50':
            base_model = keras.applications.ResNet50(
                input_shape=self.input_shape,
                include_top=False,
                weights='imagenet'
            )
        else:
            raise ValueError(f"Unsupported base_model_name: {base_model_name}")
        
        # Freeze base model
        base_model.trainable = False
        
        # Add custom head
        inputs = keras.Input(shape=self.input_shape)
        x = base_model(inputs, training=False)
        x = layers.GlobalAveragePooling2D()(x)
        x = layers.Dense(256, activation='relu')(x)
        x = layers.BatchNormalization()(x)
        x = layers.Dropout(0.5)(x)
        outputs = layers.Dense(self.num_classes, activation='softmax')(x)
        
        self.model = keras.Model(inputs, outputs)
        
        # Compile
        self.model.compile(
            optimizer=keras.optimizers.Adam(learning_rate=0.001),
            loss='categorical_crossentropy',
            metrics=['accuracy']
        )
        
    def fine_tune(self, base_model_layers_to_unfreeze=20):
        """Fine-tuning"""
        # Unfreeze part of base model
        base_model = self.model.layers[1]
        base_model.trainable = True
        
        # Make only last N layers trainable
        for layer in base_model.layers[:-base_model_layers_to_unfreeze]:
            layer.trainable = False
            
        # Recompile with lower learning rate
        self.model.compile(
            optimizer=keras.optimizers.Adam(learning_rate=0.0001),
            loss='categorical_crossentropy',
            metrics=['accuracy']
        )

MLOps Pipeline

# mlops_pipeline.py
import mlflow
import mlflow.sklearn
import mlflow.tensorflow
from mlflow.tracking import MlflowClient
import pandas as pd
import numpy as np
from datetime import datetime
import json
import yaml

class MLOpsWorkflow:
    def __init__(self, experiment_name):
        self.experiment_name = experiment_name
        mlflow.set_experiment(experiment_name)
        self.client = MlflowClient()
        
    def log_dataset_info(self, X_train, X_test, y_train, y_test):
        """Log dataset information"""
        dataset_info = {
            'train_samples': len(X_train),
            'test_samples': len(X_test),
            'features': X_train.shape[1],
            'target_distribution_train': dict(pd.Series(y_train).value_counts()),
            'target_distribution_test': dict(pd.Series(y_test).value_counts())
        }
        
        mlflow.log_params(dataset_info)
        
    def train_and_log_model(self, model, X_train, y_train, X_test, y_test, model_name):
        """Train and log model"""
        with mlflow.start_run(run_name=f"{model_name}_{datetime.now().strftime('%Y%m%d_%H%M%S')}"):
            # Log dataset info
            self.log_dataset_info(X_train, X_test, y_train, y_test)
            
            # Log model hyperparameters
            mlflow.log_params(model.get_params())
            
            # Train model
            model.fit(X_train, y_train)
            
            # Predictions and evaluation
            y_pred = model.predict(X_test)
            y_pred_proba = model.predict_proba(X_test)[:, 1] if hasattr(model, 'predict_proba') else None
            
            # Calculate and log metrics
            from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, roc_auc_score
            
            metrics = {
                'accuracy': accuracy_score(y_test, y_pred),
                'precision': precision_score(y_test, y_pred, average='weighted'),
                'recall': recall_score(y_test, y_pred, average='weighted'),
                'f1_score': f1_score(y_test, y_pred, average='weighted')
            }
            
            if y_pred_proba is not None:
                metrics['roc_auc'] = roc_auc_score(y_test, y_pred_proba)
                
            mlflow.log_metrics(metrics)
            
            # Save model
            mlflow.sklearn.log_model(
                model, 
                model_name,
                registered_model_name=model_name,
                input_example=X_train[:5]
            )
            
            # Save feature importances (for tree-based models)
            if hasattr(model, 'feature_importances_'):
                feature_importance = pd.DataFrame({
                    'feature': X_train.columns,
                    'importance': model.feature_importances_
                }).sort_values('importance', ascending=False)
                
                # Visualize feature importances
                import matplotlib.pyplot as plt
                fig, ax = plt.subplots(figsize=(10, 8))
                feature_importance.head(20).plot.barh(x='feature', y='importance', ax=ax)
                plt.title('Feature Importances')
                plt.tight_layout()
                mlflow.log_figure(fig, 'feature_importances.png')
                plt.close()
                
            return model, metrics

# Model monitoring
class ModelMonitor:
    def __init__(self, model, reference_data):
        self.model = model
        self.reference_data = reference_data
        self.monitoring_results = []
        
    def detect_data_drift(self, new_data):
        """Detect data drift"""
        from scipy import stats
        
        drift_results = {}
        
        for column in self.reference_data.columns:
            if pd.api.types.is_numeric_dtype(self.reference_data[column]):
                # For numerical data: KS test
                statistic, p_value = stats.ks_2samp(
                    self.reference_data[column],
                    new_data[column]
                )
                drift_results[column] = {
                    'test': 'ks_test',
                    'statistic': statistic,
                    'p_value': p_value,
                    'drift_detected': p_value < 0.05
                }
            else:
                # For categorical data: Chi-square test on observed vs. expected counts
                ref_dist = self.reference_data[column].value_counts(normalize=True)
                new_counts = new_data[column].value_counts()
                
                # Align categories between reference and new data
                all_categories = sorted(set(ref_dist.index) | set(new_counts.index))
                ref_dist = ref_dist.reindex(all_categories, fill_value=0)
                new_counts = new_counts.reindex(all_categories, fill_value=0)
                
                # Expected counts follow the reference distribution;
                # a small epsilon avoids zero expected frequencies
                expected = ref_dist + 1e-8
                expected = expected / expected.sum() * new_counts.sum()
                
                statistic, p_value = stats.chisquare(new_counts, expected)
                drift_results[column] = {
                    'test': 'chi_square',
                    'statistic': statistic,
                    'p_value': p_value,
                    'drift_detected': p_value < 0.05
                }
                
        return drift_results
        
    def monitor_model_performance(self, new_data, new_labels):
        """Monitor model performance"""
        predictions = self.model.predict(new_data)
        
        from sklearn.metrics import accuracy_score, precision_score, recall_score
        
        performance = {
            'timestamp': datetime.now(),
            'accuracy': accuracy_score(new_labels, predictions),
            'precision': precision_score(new_labels, predictions, average='weighted'),
            'recall': recall_score(new_labels, predictions, average='weighted'),
            'sample_size': len(new_labels)
        }
        
        self.monitoring_results.append(performance)
        
        # Detect performance degradation
        if len(self.monitoring_results) > 1:
            recent_accuracy = np.mean([r['accuracy'] for r in self.monitoring_results[-5:]])
            baseline_accuracy = self.monitoring_results[0]['accuracy']
            
            if recent_accuracy < baseline_accuracy * 0.95:  # 5% or more degradation
                print(f"Warning: Model performance has degraded. "
                      f"Baseline: {baseline_accuracy:.4f}, "
                      f"Current: {recent_accuracy:.4f}")
                
        return performance

# Production deployment
class ModelDeployment:
    def __init__(self, model_name, model_version):
        self.model_name = model_name
        self.model_version = model_version
        self.client = MlflowClient()
        
    def load_production_model(self):
        """Load production model"""
        model_uri = f"models:/{self.model_name}/{self.model_version}"
        model = mlflow.sklearn.load_model(model_uri)
        return model
        
    def create_prediction_service(self):
        """Create prediction service"""
        from fastapi import FastAPI, HTTPException
        from pydantic import BaseModel
        import uvicorn
        
        app = FastAPI()
        model = self.load_production_model()
        
        class PredictionRequest(BaseModel):
            features: list
            
        class PredictionResponse(BaseModel):
            prediction: float
            probability: list
            model_version: str
            
        @app.post("/predict", response_model=PredictionResponse)
        async def predict(request: PredictionRequest):
            try:
                # Convert input data
                input_data = np.array(request.features).reshape(1, -1)
                
                # Predict
                prediction = model.predict(input_data)[0]
                probability = model.predict_proba(input_data)[0].tolist()
                
                return PredictionResponse(
                    prediction=float(prediction),
                    probability=probability,
                    model_version=self.model_version
                )
            except Exception as e:
                raise HTTPException(status_code=400, detail=str(e))
                
        return app
        
# Usage example
if __name__ == "__main__":
    # Initialize MLOps workflow
    mlops = MLOpsWorkflow("customer_churn_prediction")
    
    # Prepare data (synthetic data)
    from sklearn.datasets import make_classification
    X, y = make_classification(n_samples=1000, n_features=20, n_informative=15)
    X = pd.DataFrame(X, columns=[f'feature_{i}' for i in range(20)])
    
    from sklearn.model_selection import train_test_split
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    
    # Train and log model
    from sklearn.ensemble import RandomForestClassifier
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    
    trained_model, metrics = mlops.train_and_log_model(
        model, X_train, y_train, X_test, y_test, "random_forest_v1"
    )
    
    print("Model metrics:", metrics)