CS 451/651 Spring 2015 Paper Questions

For each paper, your assignment is two-fold. By 10PM the evening before lecture, submit your answer to the question posted below for that paper, and submit a question of your own about the paper.

You can also upload your questions and answers using gsubmit. Please be sure to follow the naming convention: answers should be named lec[N].txt, where N is the lecture number from the schedule, and questions should similarly be named sq[N].txt.


```sh
## Answer goes into lec[N].txt, e.g., for lecture 2:
$ gsubmit cs451 lec2.txt
## Question goes into sq[N].txt, e.g., for lecture 2:
$ gsubmit cs451 sq2.txt
```

Hypervisor-based Fault-tolerance Suppose that, instead of connecting both the primary and the backup to the same disk, we connected them to separate disks with identical copies of the data. Would this work? What else might we have to worry about, and what are some things that could go wrong?

Describe a way to tackle data skew in a MapReduce job. Does your solution help you alleviate the problem of computation skew as well? If yes, explain why. If no, how would you tackle computation skew?

Describe a sequence of events that results in a client reading stale data from the Google File System.

How does VM FT handle network partitions? That is, if the primary and the backup end up in different network partitions, is it possible that the backup will also become a primary, so that the system runs with two primaries?

Flat Datacenter Storage Suppose tractserver T1 is temporarily unreachable due to a network problem, so the metadata server drops T1 from the TLT. Then the network problem goes away, but for a while the metadata server is not aware that T1's status has changed. During this time could T1 serve client requests to read and write tracts that it stores? If yes, give an example of how this could happen. If no, explain what mechanism(s) prevent this from happening.

Paxos Made Simple Suppose that the acceptors are A, B, and C. A and B are also proposers. How does Paxos ensure that the following sequence of events can't happen? What actually happens, and which value is ultimately chosen?

  1. A sends prepare requests with proposal number 1, and gets responses from A, B, and C.
  2. A sends accept(1, "foo") to A and C and gets responses from both. Because a majority accepted, A thinks that "foo" has been chosen. However, A crashes before sending an accept to B.
  3. B sends prepare messages with proposal number 2, and gets responses from B and C.
  4. B sends accept(2, "bar") messages to B and C and gets responses from both, so B thinks that "bar" has been chosen.
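
To fix notation: prepare(n) and accept(n, v) above refer to the paper's phase 1 and phase 2 messages. A minimal Go sketch of the message types (field names are ours, not the paper's) may help keep the scenario straight:

```go
package paxos

// Prepare is a phase-1a request: the proposer asks acceptors to promise
// never again to accept a proposal numbered less than N.
type Prepare struct {
	N int // proposal number, e.g., 1 or 2 in the scenario above
}

// Promise is a phase-1b response: the acceptor reports the highest-numbered
// proposal, if any, that it has already accepted.
type Promise struct {
	N           int    // the proposal number being promised
	AcceptedN   int    // number of the highest proposal already accepted (0 if none)
	AcceptedVal string // value of that proposal ("" if none)
}

// Accept is a phase-2a request: the proposer asks acceptors to accept
// proposal number N with value Value, e.g., accept(1, "foo").
type Accept struct {
	N     int
	Value string
}
```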

Russ Cox is one of the leads on the Go project. What do you like best about Go? Why? Would you want to change anything in the language? If so, what and why?

One use of Zookeeper is a fault-tolerant lock service (see the section "Simple locks" on page 6). Why isn't it possible for two clients to acquire the same lock? In particular, how does Zookeeper decide that a client has failed, so that it can give the lock to some other client?
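
For context, the "Simple locks" recipe amounts to: try to create a designated znode with the EPHEMERAL flag; if the create succeeds you hold the lock, otherwise watch the znode and retry when it disappears. The Go sketch below uses a made-up Client interface purely for illustration; it is not the real ZooKeeper client API:

```go
package zklock

// Client is a hypothetical stand-in for a ZooKeeper client session.
type Client interface {
	// Create makes a znode; an ephemeral znode is deleted automatically
	// when the creating client's session ends.
	Create(path string, ephemeral bool) error
	// ExistsWatch reports whether the znode exists and, if it does,
	// arranges a one-shot notification on ch when it is deleted.
	ExistsWatch(path string, ch chan struct{}) (bool, error)
}

// Lock follows the simple-lock recipe from the paper.
func Lock(c Client, path string) error {
	for {
		if err := c.Create(path, true); err == nil {
			return nil // we hold the lock
		}
		gone := make(chan struct{}, 1)
		exists, err := c.ExistsWatch(path, gone)
		if err != nil {
			return err
		}
		if exists {
			<-gone // wait for the current holder to release the lock (or fail)
		}
	}
}
```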

Replication in the Harp File System Figures 5-1, 5-2, and 5-3 show that Harp often finishes benchmarks faster than a conventional non-replicated NFS server. This may be surprising, since you might expect Harp to do strictly more work than a conventional NFS server (for example, Harp must manage the replication). Why is Harp often faster? Will all NFS operations be faster with Harp than on a conventional NFS server, or just some of them? Which?

Frangipani: A Scalable Distributed File System: Suppose a server modifies an i-node, appends the modification to its log, then another server modifies the same i-node, and then the first server crashes. The recovery system will see the i-node modification in the crashed server's log, but should not apply that log entry to the i-node, because that would undo the second server's change. How does Frangipani avoid or cope with this situation?

No compromises: distributed transactions with consistency, availability, and performance: Suppose there are two FaRM transactions that both increment the same object. They start at the same time and see the same initial value for the object. One transaction completely finishes committing (see Section 4 and Figure 4). Then the second transaction starts to commit. There are no failures. What is the evidence that FaRM will use to realize that it must abort the second transaction? At what point in the Section 4 / Figure 4 protocol will FaRM realize that it must abort?

Spanner Suppose a Spanner server's TT.now() returns correct information, but the uncertainty is large. For example, suppose the absolute time is 10:15:30, and TT.now() returns the interval [10:15:20,10:15:40]. That interval is correct in that it contains the absolute time, but the error bound is 10 seconds. See Section 3 for an explanation of TT.now(). What bad effect will a large error bound have on Spanner's operation? Give a specific example.
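
For reference, Section 3 (Table 1) describes TrueTime as returning an interval rather than a single timestamp. A rough Go sketch of that API shape (an illustration only, not Spanner code):

```go
package truetime

import "time"

// Interval is what TT.now() returns: the absolute time is guaranteed
// to lie somewhere in [Earliest, Latest].
type Interval struct {
	Earliest time.Time
	Latest   time.Time
}

// ErrorBound is the instantaneous uncertainty, half the interval width;
// 10 seconds in the example above.
func (i Interval) ErrorBound() time.Duration {
	return i.Latest.Sub(i.Earliest) / 2
}

// After reports whether t has definitely passed (TT.after(t) in the paper).
func (i Interval) After(t time.Time) bool { return i.Earliest.After(t) }

// Before reports whether t has definitely not yet arrived (TT.before(t)).
func (i Interval) Before(t time.Time) bool { return i.Latest.Before(t) }
```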

Secure Untrusted Data Repository (SUNDR) In the simple straw-man, both fetch and modify operations are placed in the log and signed. Suppose an alternate design that only signs and logs modify operations. Does this allow a malicious server to break fetch-modify consistency or fork consistency? Why or why not?

Memory Coherence in Shared Virtual Memory Systems. Answer this question using ivy-code.txt, which is a version of the code in Section 3.1 with some clarifications and bug fixes. You can see that the manager part of the WriteServer sends out invalidate messages, and waits for confirmation messages indicating that the invalidates have been received and processed. Suppose the manager sent out invalidates, but did not wait for confirmations. Describe a scenario in which lack of the confirmation would cause the system to behave incorrectly. You should assume that the network delivers all messages, and that none of the computers fail.

TreadMarks: Distributed Shared Memory on Standard Workstations and Operating Systems Suppose that a simplified version of TreadMarks, called Dreadmarks, simply sent all modifications of variables made between an acquire and a release to the next processor that acquires the same lock. No other modifications are sent. What changes does TreadMarks send that Dreadmarks does not? Outline a specific simple situation in which TreadMarks would provide more useful or intuitive memory behavior than Dreadmarks.

Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing What applications can Spark support well that MapReduce/Hadoop cannot support?

Ficus Imagine a situation like the paper's Figure 1, but in which only Site A updates file Foo. What should Ficus do in that case when the partition is merged? Explain how Ficus could tell the difference between the situation in which both Site A and Site B update Foo, and the situation in which only Site A updates Foo.

Managing Update Conflicts in Bayou Suppose we build a distributed filesystem using Bayou, and the system has a copy operation. Initially, file A contains "foo" and file B contains "bar". On one node, a user copies file A to file B, overwriting the old contents of B. On another node, a user copies file B to file A. After both operations are committed, we want both files to contain "foo" or for both files to contain "bar". Sketch a dependency check and merge procedure for the copy operation that makes this work. How does Bayou ensure that all the nodes agree about whether A and B contain "foo" or "bar"?
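
As a reminder of the form your answer can take: a Bayou write bundles the nominal update with an application-supplied dependency check and merge procedure (Section 3 of the paper). A rough Go sketch of that shape, with made-up types; the specific check and merge logic is what the question asks you to design:

```go
package bayou

// State is a hypothetical stand-in for the replicated filesystem,
// mapping file names to contents.
type State map[string]string

// Write is the shape of a Bayou write for the copy operation.
type Write struct {
	// DepCheck runs against the current state and reports whether the
	// precondition the user saw still holds.
	DepCheck func(s State) bool
	// Update applies the intended change (e.g., copy A's contents to B).
	Update func(s State)
	// Merge runs instead of Update when DepCheck fails.
	Merge func(s State)
}
```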

PNUTS: Yahoo!'s Hosted Data Serving Platform Briefly explain why it is (or isn't) okay to use relaxed consistency for social applications (see Section 4). Does PNUTS handle the type of problem presented by Example 1 in Section 1, and if so, how?

Dynamo Suppose Dynamo server S1 is perfectly healthy with a working network connection. By mistake, an administrator instructs server S2 to remove S1 using the mechanisms described in 4.8.1 and 4.9. It takes a while for the membership change to propagate from S2 to the rest of the system (including S1), so for a while some clients and servers will think that S1 is still part of the system. Will Dynamo operate correctly in this situation? Why, or why not?

Wormhole stores the datamarker for a multi-copy reliable delivery (MCRD) flow in Zookeeper, instead of in persistent storage on the publisher side (as Single-Copy Reliable Delivery does). Why does MCRD store the datamarker in Zookeeper?

Existential Consistency. Section 2.2 mentions that causal consistency is a non-local consistency model, and thus can't be checked by the analysis in Section 3. Give a pair of examples, one legal under causal consistency and one illegal, that the Section 3 checker would not be able to distinguish. That is, show by example that the Section 3 approach can't check causal consistency.

Borg's key benefit for Google is that it efficiently utilizes cluster resources. As part of this, it is important to avoid resources being wasted by sitting idle. List and describe at least three techniques that Borg uses to ensure good machine and cluster utilization. What potential negative consequences, if any, does each technique have for (a) non-production/low-priority, and (b) production/high-priority workloads?

Kademlia: A Peer-to-peer Information System Based on the XOR Metric Consider a Kademlia-based key-value store with a million users, with non-mutable keys: once a key is published, it will not be modified. The k/v store experiences a network partition into two roughly equal partitions A and B for 1.5 hours.

X is a very popular key. Would nodes in both A and B likely be able to access X's value (1) during the partition? (2) 10 minutes after the network is joined? (3) 25 hours after the network is joined?

(optional) Would your answer change if X was an un-popular key?
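
As a reminder of the metric itself, Kademlia measures closeness as the XOR of two IDs interpreted as an integer. A minimal Go sketch (real Kademlia IDs are 160 bits; these are shortened for illustration):

```go
package main

import (
	"fmt"
	"math/big"
)

// distance returns the Kademlia XOR distance between two equal-length IDs,
// interpreted as unsigned big-endian integers.
func distance(a, b []byte) *big.Int {
	d := make([]byte, len(a))
	for i := range a {
		d[i] = a[i] ^ b[i]
	}
	return new(big.Int).SetBytes(d)
}

func main() {
	x := []byte{0x0f, 0x00, 0x00, 0x01} // toy 32-bit IDs
	y := []byte{0x0f, 0x00, 0xff, 0x00}
	fmt.Println(distance(x, y)) // 65281: a long shared prefix means a small distance
}
```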

For the design described in Chord: a scalable peer-to-peer lookup service for Internet applications, if two Chord nodes x and y are located nearby each other in the Internet (e.g., they are both in the same datacenter), is the number of Chord hops small (say 1 or 2 instead of log N) when x is looking up a key that is stored on node y?

Distributed Programming in Argus. Starting at the bottom-left of page 310, the paper mentions that a participant writes new versions to disk twice: once before replying to a prepare message, and once after receiving a commit message. Why are both writes necessary? What could go wrong if participants replied to the prepare without writing the disk, instead only writing the disk after receiving a commit message?

Efficient Optimistic Concurrency Control Using Loosely Synchronized Clocks. The paper mentions that, after a server commits a transaction, the server sends out invalidation messages to clients that are caching data written by that transaction. It may take a while for those invalidations to arrive; during that time, transactions at other clients may read stale cached data. How does Thor cope with this situation?

Naiad: A Timely Dataflow System: Consider the data-flow graph in Figure 3, and assume that the vertices are instantiated as follows:

- A is a distinct-by-key operator, which for inputs of (key, value) only emits the first (key, value) record with each distinct key.

- B is a multiply operator, which multiplies the value by two (i.e., for (key, value), it emits (key, value * 2)).

- C is a conditional operator, which sends its output to E if the input value is greater than 10, and sends it to F otherwise.

- D is an aggregation operator that sums all values irrespective of key.

Now assume that we introduce two records in epoch e = 1 at the In vertex: (a, 2) and (b, 6); and one record in e = 2: (a, 5).

Write down the timestamp changes that the records experience as they flow through the graph, e.g., (a, 2): at A, t = (1, []); at B, t = .... You may omit vertices that do not modify the timestamp.

Informally explain when E.OnNotify((1, [])) will be called. You do not need to step through the details of the pointstamp protocol.
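
If it helps, here is a value-level Go sketch of operators A through D, deliberately ignoring epochs, timestamps, and the actual Naiad API, since the timestamps are what the question asks you to work out:

```go
package naiadsketch

type Record struct {
	Key   string
	Value int
}

// A: distinct-by-key; emits only the first record seen for each key.
func makeDistinct() func(Record) []Record {
	seen := map[string]bool{}
	return func(r Record) []Record {
		if seen[r.Key] {
			return nil
		}
		seen[r.Key] = true
		return []Record{r}
	}
}

// B: multiply the value by two.
func multiply(r Record) Record {
	return Record{Key: r.Key, Value: r.Value * 2}
}

// C: route to E if the value is greater than 10, to F otherwise.
func route(r Record, toE, toF func(Record)) {
	if r.Value > 10 {
		toE(r)
	} else {
		toF(r)
	}
}

// D: sum all values irrespective of key.
func sum(rs []Record) int {
	total := 0
	for _, r := range rs {
		total += r.Value
	}
	return total
}
```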

Parameter Server: The parameter server model was developed for running machine-learning algorithms like sparse logistic regression (Sec. 5.1).

Which resource (e.g., CPU, memory, network) do you expect to be the bottleneck, that is, the most heavily loaded, at the workers and at the servers?

What resource utilization would you likely observe if using Spark's logistic regression (see Sec. 3.2.1 in the Spark paper) instead of the parameter server for the experiment in Sec. 5.1?

Ray supports two programming abstractions: tasks and actors. Which one would you use to implement the reduce() operator in the wordcount dataflow of Lab 1 and why? How would you make your reduce() operator data-parallel?

Plan 9 from Bell Labs List three features introduced by Plan 9 that have not been adopted by today's common operating systems. For each, why do you think the idea hasn't become popular?

MapReduce How soon after it receives the first file of intermediate data can a reduce worker start calling the application's Reduce function? Explain your answer.

Bitcoin Suppose one modified the Bitcoin protocol so that mining only occurs when there are new transactions. If there are no transactions waiting to be placed in a block, then the peers (and miners) do nothing. This change might save some electricity. Describe an attack that this change would make easier than in unmodified Bitcoin.

Memcache at Facebook. Section 3.3 implies that a client that writes data does not delete the corresponding key from the Gutter servers, even though the client does try to delete the key from the ordinary Memcached servers (Figure 1). Explain why it would be a bad idea for writing clients to delete keys from Gutter servers.

Although there's no paper assigned for today, please send us a question about Go (the RPC library, channels, threads, lab 1, etc).

The Remus paper's Figure 6 suggests that less frequent checkpoints can lead to better performance. Of course, checkpointing only every X milliseconds means that up to X milliseconds of work are lost if the primary crashes. Suppose it was OK to lose an entire second of work if the primary crashed. Explain why checkpointing every second would lead to terrible performance if the application running on Remus were a Web server.

When will an EPaxos replica R execute a particular command C? Think about when commands are committed, command interference, read operations, etc.

Suppose we have the scenario shown in the Raft paper's Figure 7: a cluster of seven servers, with the log contents shown. The first server crashes (the one at the top of the figure), and cannot be contacted. A leader election ensues. For each of the servers marked (a), (d), and (f), could that server be elected? If yes, which servers would vote for it? If no, what specific Raft mechanism(s) would prevent it from being elected?

Suppose we have the scenario shown in the Raft paper's Figure 6: a cluster of five servers with the log contents shown. Assume the command at index 6 is the last one that has been applied to the state machine. Describe a sequence of events that results in applying the commands at indexes 7 and 8 of the leader's log.

Could a received InstallSnapshot RPC cause the state machine to go backwards in time? That is, could step 8 in Figure 13 cause the state machine to be reset so that it reflects fewer executed operations? If yes, explain how this could happen. If no, explain why it can't happen.

Give an example scenario where an application would behave incorrectly if it waited for just the first hop in a chain to commit, but would behave correctly by waiting for the entire chain to commit.

Building distributed systems in the real world involves both technical challenges (e.g., dealing with bad data) and non-technical ones (e.g., how to structure a team). Which do you think is harder to get right? What examples can you cite from your personal experience?

Experiences with a Distributed, Scalable, Methodological File System: AnalogicFS. In many ways, this experiences paper raises more questions than it answers. Please answer one of the following questions, taking into consideration the rich history of AnalogicFS and the spirit in which the paper was written:

a) The analysis of A* search shown in Figure 1 claims to be an introspective visualization of the AnalogicFS methodology; however, not all decisions are depicted in the figure. In particular, if I <= P, what should be the next node explored such that all assumptions in Section 2 still hold? Show your work.

b) Despite the authors' claims in the introduction that AnalogicFS was developed to study SCSI disks (and their interaction with lambda calculus), the experimental setup detailed in Section 4.1 involves decommissioned Gameboys instead, which use cartridge-based, Flash-like memory. If the authors had used actual SCSI disks during the experiments, how exactly might their results have changed quantitatively?

c) AnalogicFS shows rather unstable multicast algorithm popularity (Figure 5), especially compared with some of the previous systems we've read about in this course. Give an example of another system that would have a more steady measurement of popularity pages, especially in the range of 0.1-0.4 decibels of bandwidth.

d) For his course project, Ben Bitdiddle chose to build a variant of Lab 4 that faithfully emulates the constant expected seek time across LISP machines, as AnalogicFS does. Upon implementation, however, he immediately ran into the need to cap the value size to 400 nm, rather than 676 nm. Explain what assumptions made for the AnalogicFS implementation do not hold true for Lab 4, and why that changes the maximum value size.


Questions or comments regarding CS 451/651? Send e-mail to liagos@bu.edu.
