The intrinsic computational problem of the N-body system is its
O(N²) time complexity: each star exerts a force on every other
star. Combined with the requirement that, for many realistic
simulations, N must be very large, this presents a formidable
computational challenge.  The numerical challenge, on the other hand,
is posed by the large dynamic range in these computations. Encounters
between normal stars take place on time scales of hours and at
distances of millions of kilometers, while encounters between neutron
stars and stellar-mass black holes occur on time scales of
milliseconds and at distances of tens of kilometers. In contrast,
the dynamical evolution of the star cluster unfolds over billions of
years, and these long time scales are equally important for
understanding the evolution of the entire stellar system.  At the same
time, the nuclear-driven evolution of stars and binaries in the
cluster spans the full range of time and length scales.
Realistic simulations must, therefore, comfortably span at least
twenty orders of magnitude in time and thirteen orders of magnitude in
space.
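
To make the scaling concrete, the following minimal Python sketch (the function name, units, and softening parameter are illustrative assumptions, not part of any proposed code) implements the direct pairwise summation whose N(N-1) force evaluations give the O(N²) cost:

    import numpy as np

    G = 1.0  # gravitational constant in dimensionless N-body units (assumption)

    def accelerations(pos, mass, eps=1e-4):
        """Direct O(N^2) summation: every star pulls on every other star.

        pos  : (N, 3) float array of positions
        mass : (N,)   float array of masses
        eps  : softening length, avoids singular forces in close encounters
        """
        n = len(mass)
        acc = np.zeros_like(pos)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                dr = pos[j] - pos[i]      # separation vector
                r2 = dr @ dr + eps**2     # softened squared distance
                acc[i] += G * mass[j] * dr / r2**1.5
        return acc                        # N*(N-1) force evaluations in total

Doubling N quadruples the work, which is why the schemes discussed below were developed.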
Over the past decades many efficient schemes have been developed to reduce the time complexity of N-body simulations: Particle-Mesh (PM) and Particle-Particle-Particle-Mesh (P3M) methods [1, 27], individual time-marching schemes [e.g. 2, 3], and hierarchical methods such as the Barnes-Hut tree code [4] or the Fast Multipole Method [5]. For all these methods efficient vectorised and parallelised algorithms have been developed, and the combination of efficient algorithms executing on very powerful computers makes it possible to increase the number of particles drastically [e.g. 6]. The main problem remains that, despite impressive successes, a large-scale N-body simulation requires extended access to the most powerful computer resources in the world. For many research groups this is not a realistic option.

Another approach was taken by a team at the University of Tokyo. Over the last 12 years they have developed a family of special-purpose computers for the N-body problem, culminating in GRAPE-6, the first computer ever built with a theoretical peak performance of more than 100 Tflop/s and a sustained performance of over 50 Tflop/s, and winner of the Gordon Bell prize in 2000 [7, and references therein] and 1996. GRAPE (GRAvity PipelinE) is specialized hardware that calculates the interactions between particles. It is connected to a general-purpose host computer (a workstation), which performs all calculations other than the force computations. At the moment two GRAPE-4 boards (the 1 Tflop/s predecessor of GRAPE-6 and winner of the Gordon Bell prize in 1995) are available at the University of Amsterdam. GRAPE boards are available to other research groups as well, allowing them to realize their own, permanently available, powerful special-purpose N-body computer for a fraction of the cost of (access to) a general-purpose system.

We propose to develop and realize an N-body computing lab at the UvA, based on a cluster of GRAPEs. Taking the new approach of combining a number of workstations with a GRAPE back-end into a cluster, and drawing on the parallel-computing and hierarchical-methods expertise of the computer science department [8, 28], we will realize an efficient hierarchical-method-based N-body solver in this environment, thus circumventing the "host bottleneck" reported by Makino [9, 10]; a schematic of the host-GRAPE division of labor is sketched at the end of this section. Part of the research will also be directed at exploiting the GRAPE system for higher-order (i.e. more accurate) hierarchical methods and for P3M algorithms.

The main astrophysical aim of this project, however, is to study how the many binary X-ray sources and millisecond radio pulsars observed in the centers of globular clusters have formed. To this end, a code that combines stellar evolution with stellar dynamics will be constructed in the N-body lab.
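
The host-GRAPE division of labor referred to above can be illustrated with a short sketch (hypothetical names; a real GRAPE host library has a different interface), reusing the direct-summation routine from earlier as a stand-in for the hardware-accelerated force call:

    def grape_forces(pos, mass):
        """Stand-in for the force evaluation off-loaded to the GRAPE board;
        here simulated by the direct-summation sketch given earlier."""
        return accelerations(pos, mass)

    def integrate(pos, vel, mass, dt, n_steps):
        """Leapfrog integration: the host performs the cheap O(N) update
        steps while the back-end handles the expensive O(N^2) forces."""
        acc = grape_forces(pos, mass)
        for _ in range(n_steps):
            vel += 0.5 * dt * acc           # kick   (host,  O(N))
            pos += dt * vel                 # drift  (host,  O(N))
            acc = grape_forces(pos, mass)   # forces (GRAPE, O(N^2))
            vel += 0.5 * dt * acc           # kick   (host,  O(N))
        return pos, vel

The host bottleneck arises when the O(N) host work and the host-GRAPE communication in such a loop, rather than the force evaluation itself, come to dominate the wall-clock time; this is the part of the computation that the proposed cluster of workstations with GRAPE back-ends is intended to relieve.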