Contexts in source publication

Context 1
... techniques may make this less of a problem, but it remains the case that one of the costs of GC is likely to be greater memory use. Figure 1.1 shows the basic system interface. Persistent data are stored in a transactional persistent heap, which is located in stable storage, indicated by the gray in the figure. ...
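The interface sketched in this passage — a transactional persistent heap backed by stable storage — might look roughly like the following. This is a hypothetical illustration, not the thesis's actual API; all class and method names are invented.

```java
// Hypothetical sketch of a transactional persistent heap interface,
// loosely following the description above; not the thesis's actual API.
import java.util.HashMap;
import java.util.Map;

class PersistentHeap {
    // Stand-in for stable storage: the committed versions of objects.
    private final Map<Integer, String> stable = new HashMap<>();
    // Uncommitted writes made by the current transaction.
    private final Map<Integer, String> writeLog = new HashMap<>();

    String read(int id) {
        // A read sees the transaction's own writes first.
        return writeLog.getOrDefault(id, stable.get(id));
    }

    void write(int id, String value) {
        writeLog.put(id, value);       // buffered; not yet durable
    }

    void commit() {
        stable.putAll(writeLog);       // atomically install the writes
        writeLog.clear();
    }

    void abort() {
        writeLog.clear();              // discard uncommitted writes
    }

    public static void main(String[] args) {
        PersistentHeap h = new PersistentHeap();
        h.write(1, "balance=100");
        h.commit();
        h.write(1, "balance=50");
        h.abort();                     // simulate a failed transaction
        System.out.println(h.read(1)); // prints balance=100
    }
}
```

The essential property illustrated is that only `commit` moves data into the stable map, so a crash (or `abort`) before commit leaves the stable state untouched.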
Context 2
... in the interests of clarity and focus, I will concentrate on the aspects that provide durability and atomicity with respect to system failures, and I will note when alternative designs or implementations would limit the ability to implement abort and multiple or nested transactions. Figure 1.2 shows a block diagram of a basic garbage collected persistent heap. ...
Context 3
... before the flip can occur, all the data must be copied into to-space, and any changes recorded in the write log must be applied to the to-space versions. Figure 1.3 shows a new design that supports transactions. ...
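The flip precondition described here — all live data copied to to-space and the write log applied to the to-space copies — can be sketched as follows, using maps as stand-ins for the semispaces. All names are illustrative assumptions, not the thesis's implementation.

```java
// Sketch of the flip precondition described above: live data must all be
// in to-space and the write log applied before the spaces can swap.
// Maps stand in for semispaces; all names are illustrative.
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

class Flip {
    static Map<Integer, String> fromSpace = new HashMap<>();
    static Map<Integer, String> toSpace = new HashMap<>();
    // Mutations made while the collector was copying: id -> new value.
    static Map<Integer, String> writeLog = new LinkedHashMap<>();

    static void flip() {
        // 1. Copy any not-yet-copied live data into to-space.
        for (Map.Entry<Integer, String> e : fromSpace.entrySet())
            toSpace.putIfAbsent(e.getKey(), e.getValue());
        // 2. Apply the write log so to-space reflects concurrent updates.
        toSpace.putAll(writeLog);
        writeLog.clear();
        // 3. Only now may the semispaces swap roles.
        fromSpace = toSpace;
        toSpace = new HashMap<>();
    }

    public static void main(String[] args) {
        fromSpace.put(1, "x=0");
        writeLog.put(1, "x=1");       // client wrote during collection
        flip();
        System.out.println(fromSpace.get(1)); // prints x=1
    }
}
```

Applying the log after the copy (step 2 after step 1) matters: a to-space copy taken before a client write would otherwise silently revert that write at the flip.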
Context 4
... obvious solution is to cache a copy of the data in volatile memory. Figure 1.4 shows a version of the design in which the client reads, writes, and allocates data in volatile memory (shown as white) but the garbage collector continues to work on the stable versions of the heap. ...

Context 5
... the collection can complete, this volatile to-space must be written to stable storage. Figure 1.5 shows the new design. The I/O thread has been added; it writes the volatile to-space to stable to-space. ...
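The added I/O thread can be sketched as a consumer that drains the volatile to-space to a stand-in for stable storage, with the collection's completion gated on its finishing. The queue, the end-of-copy sentinel, and all names here are illustrative assumptions.

```java
// Sketch of the added I/O thread: the collector fills a volatile to-space
// and a separate thread drains it to a stand-in for stable storage before
// the collection may complete. Queue, sentinel, and names are illustrative.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class IoThreadDemo {
    static final String DONE = "EOF";  // collector's end-of-copy sentinel

    // Drain volatile to-space into stable to-space until the sentinel.
    static void drain(BlockingQueue<String> volatileToSpace,
                      List<String> stableToSpace) {
        try {
            String obj;
            while (!(obj = volatileToSpace.take()).equals(DONE))
                stableToSpace.add(obj);    // stand-in for a disk write
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> volatileToSpace = new LinkedBlockingQueue<>();
        List<String> stableToSpace = new ArrayList<>();

        Thread io = new Thread(() -> drain(volatileToSpace, stableToSpace));
        io.start();

        volatileToSpace.put("objA");       // collector copies objects...
        volatileToSpace.put("objB");
        volatileToSpace.put(DONE);         // ...then signals completion
        io.join();                         // flip must wait for the I/O thread
        System.out.println(stableToSpace); // prints [objA, objB]
    }
}
```

The `join` models the constraint in the passage: the collection cannot complete until the volatile to-space has been written out.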
Context 6
... design still has one problem: all data are persistent. Figure 1.6 shows the system with the addition of a transitory heap; all of the spaces of the previous design are now grouped together as the persistent heap. The system now has two root sets, one for transitory data and one for persistent data. ...
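The two-root-set arrangement can be illustrated with a small reachability sketch: objects reachable from the persistent root must be made durable, while objects reachable only from the transitory root need not survive a crash. The object graph and names below are invented for illustration.

```java
// Sketch of the two root sets described above: objects reachable from the
// persistent root belong in the persistent heap; objects reachable only
// from the transitory root need not survive a crash. Names are illustrative.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

class RootSets {
    // Object graph: object name -> objects it references.
    static Map<String, List<String>> refs = new HashMap<>();

    static Set<String> reachable(String root) {
        Set<String> seen = new HashSet<>();
        Deque<String> work = new ArrayDeque<>(List.of(root));
        while (!work.isEmpty()) {
            String obj = work.pop();
            if (seen.add(obj))
                work.addAll(refs.getOrDefault(obj, List.of()));
        }
        return seen;
    }

    public static void main(String[] args) {
        refs.put("persistentRoot", List.of("account"));
        refs.put("account", List.of("ledger"));
        refs.put("transitoryRoot", List.of("tempBuffer"));
        // account and ledger must be made durable; tempBuffer need not be.
        System.out.println(reachable("persistentRoot").contains("ledger"));     // prints true
        System.out.println(reachable("persistentRoot").contains("tempBuffer")); // prints false
    }
}
```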
Context 7
... also repeat the uniprocessor results with the addition of the quartiles. Figure 10.1 shows the dependence of the elapsed time of OO1 on the livesize for the multiprocessor. The y-axis shows the number of seconds taken to complete a single iteration of the benchmark. ...
Context 8
... of the results are similar and support the same general conclusions about Sidney's behavior. [Figure 10.2: OO1 Times versus Livesize (uniprocessor); plot legend: Stop-and-Copy GC, Concurrent GC] ...
Context 9
... practice, with less aggressive collection parameters, the collection pauses would be even less disruptive since they would be less frequent. Figure 10.7 shows the commit and collection pause times for the uniprocessor and Figure 10.8 for the multiprocessor. These plots are similar to Figure 10.6, but they include the median pause values for commit when the system is using both kinds of collection, and they omit the non-flip pauses for concurrent collection. ...
Context 10
... practice, with less aggressive collection parameters, the collection pauses would be even less disruptive since they would be less frequent. Figure 10.7 shows the commit and collection pause times for the uniprocessor and Figure 10.8 for the multiprocessor. ...
Context 11
... 10.7 shows the commit and collection pause times for the uniprocessor and Figure 10.8 for the multiprocessor. These plots are similar to Figure 10.6, but they include the median pause values for commit when the system is using both kinds of collection, and they omit the non-flip pauses for concurrent collection. ...
Context 12
... minimum livesize, 4.5 MB, is used to maximize the number of flips, but the results hold in general. Figure 10.9 shows the distribution of all GC pauses. ...

Similar publications

Article
If curiosity to catch genesis in the act still persists in us, even though the work of Cecilia Almeida Salles has shown us, with due clarity and theoretical grounding, that the origin is nothing but a great, unattainable mythical moment, we share with the readers of Manuscrítica records of inestimable genetic value: the correspondence that Loyola sent to Salles and...

Citations

... If we are able to bring the CPU overhead of our system (the no-write case) down such that it plus the CPU time involved in actually doing the write is less than a rotational delay, then we should see almost a factor of two improvement, since we will avoid missing a disk revolution. We have observed such speedups before using similar benchmarks [9]. For clues about how to reduce our overheads, we first note that the overheads without durability are minimal, thus speeding up Java or making write logging faster will have little effect. ...
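The rotational-delay arithmetic behind this claim can be made concrete. The 7200 RPM figure and the millisecond costs below are assumptions for illustration; the excerpt does not give the actual numbers.

```java
// Worked version of the rotational-delay argument above. The 7200 RPM
// figure and the millisecond costs are assumptions for illustration; the
// excerpt does not give the actual numbers.
class RotationalDelay {
    // One full revolution in milliseconds: 60,000 ms per minute / RPM.
    static double revolutionMs(double rpm) {
        return 60_000.0 / rpm;
    }

    // The condition in the excerpt: system overhead plus write CPU time
    // must fit within one revolution to avoid waiting an extra full turn.
    static boolean fitsInOneRevolution(double rpm, double overheadMs,
                                       double writeMs) {
        return overheadMs + writeMs < revolutionMs(rpm);
    }

    public static void main(String[] args) {
        double rpm = 7200.0;                          // assumed disk speed
        System.out.printf("revolution = %.2f ms%n", revolutionMs(rpm)); // ~8.33 ms
        // Hypothetical costs: 3 ms overhead + 2 ms write fits in one
        // revolution, so the next write catches the same pass instead of
        // waiting a full extra turn (nearly doubling throughput).
        System.out.println(fitsInOneRevolution(rpm, 3.0, 2.0)); // prints true
    }
}
```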
... Prior to this work, one of the authors participated in the Venari project [5], of which one goal was to provide transactions for Standard ML (SML). A discussion of the low-level aspects of that system (called Sidney) can be found in Nettles [9]. Not surprisingly, our current work draws strongly on this prior work. ...
Conference Paper
Jest is a Java VM extended to support transactions and general-purpose persistence. Jest allows Java programmers to manipulate any object using transactions and provides resilience to machine failure for these objects. Jest extends Java's current emphasis on safety and reliability to the safe and consistent management of permanent state. Our additions include syntax for transactions and run-time support for durability and atomicity. General-purpose persistence, the ability to make arbitrary kinds of objects persistent, is a key aspect of the design. We provide orthogonal persistence, in which any object can be made persistent without regard to type. We do this using persistence-by-reachability, in which an object becomes persistent if it is reachable from a special persistent root. An important aim of our implementation is to explore the techniques and tradeoffs that arise when implementing persistence in a runtime system based on mark-and-compact collection. Having previously studied designs based on copying collection, this work allows us to explore additional parts of the persistence design space. Details of the implementation are provided in the paper. We have tested Jest on a debit-credit benchmark derived from TPC-B. Our system achieves a rate of 83 TPS, very close to the limits allowed by our disk and underlying logging system. Tests of the Java compiler compiling itself both with and without our extensions suggest that, for applications that do not use transactions, our extensions result in a slowdown of about 7% compared to the original Java implementation. We suggest several possible ways of improving this result.
... If we are able to bring the CPU overhead of our system (the no-write case) down such that it plus the CPU time involved in actually doing the write is less than a rotational delay, then we should see almost a factor of two improvement, since we will avoid missing a disk revolution. We have observed such speedups before using similar benchmarks [9]. ...
... Prior to this work, one of the authors participated in the Venari project [5], of which one goal was to provide transactions for Standard ML (SML). A discussion of the low-level aspects of that system (called Sidney) can be found in Nettles [9]. Not surprisingly, our current work draws strongly on this prior work. ...
Article
We present a design and implementation of transactions and general-purpose persistence for Java. These additions allow Java programmers to manipulate any Java object using transactions and provide resilience to machine failure for these objects. This extends the range of Java applicability into domains where reliability is of paramount concern; for example, network-based banking. Our design and implementation is a significant addition to Java. It extends Java's current emphasis on safety and reliability to the safe and consistent management of permanent state. Our additions take the form of syntactic extensions for transactions and runtime system support for durability and atomicity. Support for general-purpose persistence, the ability to make arbitrary kinds of objects persistent, is a key aspect of the design. We provide orthogonal persistence, in which any object can be made persistent, without regard to type. We also provide persistence-by-reachability, in which an object becomes pe...
Conference Paper
Comparative experimentation is increasingly important in computer science, but performing such experiments can be challenging. The paper presents a set of experiments that compare the performance of two persistent storage managers, and answers the question of whether the safer storage manager has performance comparable to the less safe one. This comparison was difficult for a number of reasons, among them: relatively few programs using either storage manager existed, no established benchmarks existed, and the two techniques are incompatible at the source code level, making a direct comparison impossible. In particular, one storage manager used a malloc-and-free style of dynamic storage allocation, while the other used a high-performance concurrent garbage collector. A number of approaches were used to overcome this difficulty. The most novel approach involved tracing the memory management of a production program that used the malloc-and-free based storage manager and then replaying the trace in an environment that allowed garbage collection and malloc-and-free to be compared. The study represents the most extensive study of a garbage-collected persistent storage system to date.
Article
Flash memory is a solid-state semiconductor memory technology that has interesting price, performance, and semantic tradeoffs. We've developed Gordon, a general-purpose persistence system for Standard ML, that uses Flash mapped into the virtual address space as its stable storage medium. Flash supports a write-once/bulk-erase interface, which makes it difficult to support update-in-place semantics. In addition, Flash chips are only guaranteed to survive a limited number of erase cycles. Gordon has been designed to overcome these difficulties, and our performance analysis demonstrates good performance and reasonable lifetimes for appropriate application domains. 1 Introduction Flash RAM is a semiconductor memory which offers significant new price, performance, and semantic tradeoffs for the "main" memory of computer systems. Flash has read performance and density comparable to DRAM, but unlike DRAM, data stored in Flash is stable and is not lost on power failure. Compared to disk, the m...