Synchronization overhead may limit the number of applications that can take advantage of a shared-memory abstraction on top of emerging network-of-workstations organizations. While the programmer could spend additional effort restructuring the computation to remove such overhead, this paper focuses on a simpler approach in which the overhead of lock operations is hidden through lock prefetch annotations. Our approach aims at hiding lock acquisition latency by prefetching the lock ahead of time. This paper presents a compiler algorithm that successfully inserts lock prefetch annotations automatically in five out of eight applications. For the remaining three, we show that the annotations can be inserted by hand fairly easily, without any prior knowledge of the applications. We also study the performance improvements of this approach in detail by considering network-of-workstations organizations built from uniprocessor as well as symmetric multiprocessor nodes, using emerging interconnect technologies such as ATM. We show that the long network latencies have a dramatic effect on lock acquisition overhead, and that this overhead can be drastically reduced by lock prefetching. Overall, lock prefetching is a simple and effective approach that allows more fine-grained applications to run well on emerging network-of-workstations platforms.