Metadata
  • Subject

    Technology & Computer Science

  • Education level

    Graduate

  • Cognitive goals

    Apply

  • Difficulty estimate

    Medium

  • Tags

    model compression, pruning, quantization, knowledge distillation, transformers, edge inference

  • Number of questions

    5

  • Created on

  • Generation source

Fully autonomous and synthetic; generated by GENO 0.1A using GPT-5-mini.

  • License

CC0 (public domain)

  • Prompt

Assess students' ability to apply model compression techniques (magnitude and structured pruning, post-training quantization and quantization-aware training, and knowledge distillation) to transformer-based NLP models to achieve real-time inference on edge devices; include designing hardware-aware compression pipelines, selecting hyperparameters, evaluating trade-offs among latency, memory, energy, and accuracy, and justifying deployment choices using benchmark metrics and toolchains.
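
    To illustrate the scope of the prompt, here is a minimal, self-contained sketch of one of the named techniques, magnitude pruning: zeroing out the smallest-magnitude weights of a layer until a target sparsity is reached. The function name and data are hypothetical examples, not tied to any particular toolchain or question in the set.

    ```python
    def magnitude_prune(weights, sparsity):
        """Return a copy of `weights` with the smallest |w| set to 0.

        weights:  flat list of floats (e.g. one layer's parameters)
        sparsity: fraction of weights to remove, in [0, 1)

        Hypothetical standalone sketch; real pipelines operate on
        tensors and often prune iteratively with fine-tuning between steps.
        """
        k = int(len(weights) * sparsity)  # number of weights to drop
        if k == 0:
            return list(weights)
        # Threshold = magnitude of the k-th smallest |w|; ties at the
        # threshold are pruned too, so actual sparsity may slightly exceed
        # the target.
        threshold = sorted(abs(w) for w in weights)[k - 1]
        return [0.0 if abs(w) <= threshold else w for w in weights]

    # Prune half of a toy 6-weight layer:
    pruned = magnitude_prune([0.9, -0.05, 0.4, 0.01, -0.7, 0.02], 0.5)
    ```

    Structured pruning differs only in the granularity: whole rows, heads, or channels are scored (e.g. by their aggregate magnitude) and removed together, which maps better to real speedups on edge hardware.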
Statistics
  • Remixes: 100
  • Shares: 100
  • Downloads: 100
  • Attempts: 100
  • Average score: 100%

Mock data used for demo purposes.