COMPSCI 296.2, Fall 2004

Experimental Methods in Computer Systems

Here are the conference proceeding assignments:
SOSP: Shobana
Mobisys: Angela
Ubicomp: Jam
ISCA: Nathan
Sensys: Rebecca
ASPLOS: Vijeta
FAST: Sita
NSDI: Badrish
USENIX Security: Siddhesh

  1. Wednesday, August 25: Make a list of the METRICS considered in your proceedings and how well they support the claims and questions each paper is trying to address. Also note how easy it is to figure out what questions the experiments are trying to answer: do the authors come right out and say what they are trying to evaluate, or is the reader expected to dig it out from the results? This should NOT necessarily involve reading all the papers in detail. It might work best for each of you to prepare one or two PowerPoint slides summarizing what you learn from your survey of metrics in your chosen conference proceedings. Just put the slides in your public_html so we can get to them via the web. Pay close attention to the definitions of the metrics: three papers could all use "latency" as a metric and mean very different things by it.
  2. Wednesday, September 1: Choose one paper and evaluate its experimental development from the point of view of Strong Inference, as discussed in class and in Platt's paper. Working in teams of 2 is OK. Prepare a short ppt presentation describing what you found.
  3. Wednesday September 8: Survey the types of workloads -- especially the standard benchmarks -- used in your proceedings (10 papers).
  4. Wednesday, Sept 22: Term Project Pre-proposal. The goal of this assignment is to (1) briefly articulate the vague idea behind your term project (brief means < 1 ppt slide) and (2) sketch out "groping around" kinds of experiments that will provide (2a) the data you would use to justify that you have an interesting problem and (2b) the data you would need to understand and model your idea well enough to move toward the hypothesis stage. If you have already done this preliminary step, describe what you did.

    Recall what I mean by "groping around" experiments: they ask about the feasibility of an idea, try to identify where the "real" bottlenecks are, or determine basic parameter values (e.g., costs) for your model. These might be experiments you do but never expect to end up as "results" in a paper.

    Approx. 2 slides are expected. Groups of 2 are allowed/encouraged. Leveraging other course projects is also allowed.

  5. Mon. Oct. 4: Bring in one example of data presentation from your proceedings that is either notoriously bad or exceptionally good. The bad ones are more fun. Or if you find something just really different, please show it.
  6. Wed. Oct 20: Survey your proceedings for methods used (simulations of various types, emulation, measurement of prototypes, measurement of real deployments). Present several (3-5) of the most "interesting" ones.
  7. Wed. Oct 27: Project Proposal covering (a) hypothesis statement, (b) workload decisions, (c) metrics to be used, and (d) method (simulation, emulation, measurement of prototype).
  8. Wed. Nov. 10: Survey your proceedings for just one paper in which factorial design has been used or, if none, one in which it could have been used effectively. Talk about the factors and levels, replications (if any), interactions among factors, and the contributions found for each (if such results are given).
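As a reminder of the mechanics behind item 8, here is a toy sketch of how main effects, the interaction, and each effect's contribution to total variation are computed for a 2x2 (two-factor, two-level) factorial design using the standard sign-table method. The factor names and response values below are invented purely for illustration; they do not come from any of the assigned proceedings.

```python
# Hypothetical 2x2 factorial design: two factors (A, B), two levels each,
# one observation per combination. All values are made up for illustration.
# Coded levels: -1 = low, +1 = high.
runs = [
    # (A, B, response y) -- e.g., A = cache size, B = workload mix (hypothetical)
    (-1, -1, 10.0),
    (+1, -1, 14.0),
    (-1, +1, 20.0),
    (+1, +1, 32.0),
]

n = len(runs)  # 2^2 = 4 runs

# Each effect = (sign-weighted sum of responses) / (n/2)
effect_A = sum(a * y for a, _, y in runs) / (n / 2)
effect_B = sum(b * y for _, b, y in runs) / (n / 2)
effect_AB = sum(a * b * y for a, b, y in runs) / (n / 2)  # interaction A*B

# Contribution of each effect to the total variation (sum of squares)
mean = sum(y for _, _, y in runs) / n
ss_total = sum((y - mean) ** 2 for _, _, y in runs)
for name, e in [("A", effect_A), ("B", effect_B), ("AB", effect_AB)]:
    ss = n * (e / 2) ** 2  # SS_effect = 2^k * (effect/2)^2
    print(f"{name}: effect={e:+.1f}, contribution={100 * ss / ss_total:.1f}%")
```

With one replicate per cell, the three sums of squares account for all of the variation; with replications you would also get an error term against which to test the effects.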


Last updated August 22, 2004